% pyspbody.tex
% source: coopr.pysp/trunk/doc/pysp/pyspbody.tex @ 2162
% Last change 2162, checked in by jwatson: added documentation for the
% --report-solutions and --report-weights options of the runph script.
\def\Argmax{\mathop{\rm argmax}}
\newcommand{\Vector}[1]{\mbox{{\boldmath $#1$}}}
\newcommand{\Matrix}[1]{\mbox{{\boldmath $#1$}}}
%\newcommand{\Size}[1]{\| #1 \|}
\newcommand{\Size}[1]{\mid #1 \mid}
\newcommand{\Floor}[1]{\lfloor #1 \rfloor}
\newcommand{\Ceil}[1]{\left\lceil #1 \right\rceil}
The pysp package extends the pyomo modeling language to support multi-stage stochastic programs with
enumerated scenarios. Pyomo and pysp are Python version 2.6 programs. In order to
specify a program, the user must provide a reference model and a scenario tree.

Provided that the necessary paths have been communicated to the operating system, the command to execute the pysp package is
of the form:

runph

It is possible,
and generally necessary, to provide command line arguments. The simplest argument causes the program to output help text:

runph --help

but notice that there are two dashes before the word ``help.'' Command line arguments are summarized in Section~\ref{cmdargsec}.
The underlying algorithm in pysp
is based on Progressive Hedging (PH) \cite{RockafellarWets}, which decomposes the problem into sub-problems, one for
each scenario.
The algorithm progressively computes {\em weights} for each variable to force convergence,
and also makes use of a {\em proximal} term that penalizes the squared deviation from the
mean solution of the previous PH iteration.
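The per-scenario subproblem and weight update can be sketched as follows. This is the standard statement of PH from \cite{RockafellarWets}, not a transcription of the pysp sources: $x_s^{(k)}$ is the solution for scenario $s$ at iteration $k$ (subject to that scenario's constraints), $p_s$ is the scenario probability, and $\bar{x}^{(k)}$ is the probability-weighted mean solution:
\[
x_s^{(k)} = \mathop{\rm argmin}_{x} \; c_s x + w_s^{(k-1)} x
            + \frac{\rho}{2} \left\| x - \bar{x}^{(k-1)} \right\|^2 ,
\]
\[
\bar{x}^{(k)} = \sum_{s} p_s \, x_s^{(k)}, \qquad
w_s^{(k)} = w_s^{(k-1)} + \rho \left( x_s^{(k)} - \bar{x}^{(k)} \right).
\]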
\subsection{Reference Model}

The reference model describes the problem for a canonical scenario. It does not make use of, or describe, a scenario index or any information about uncertainty. Typically, it is just the model that would be used if there were only a single scenario. It is given as a pyomo file. Data from an arbitrary scenario is needed to instantiate the model.

The objective function needs to be separated by stages. The term for each stage should be ``assigned'' (i.e., constrained to be equal to)
a variable. These variable names are reported in ScenarioStructure.dat so that they can be used for reporting purposes.
\subsection{Scenario Tree}

The scenario tree provides information about the time stages and the nature of the uncertainties. In order to specify a tree, we must
indicate the time stages at which information becomes available. We also specify the nodes of the tree to indicate which variables
are associated with which realization at each stage. The data for each scenario is provided in a separate data file, one for each scenario.
\section{File Structure}

\item ReferenceModel.py (a pyomo model file)
\item ReferenceModel.dat (data for an arbitrary scenario)
\item ScenarioStructure.dat (among other things: the scenario names, Sname)
\item Sname.dat (full data for the scenario; one file for each scenario)

In this list we use ``Sname'' as the generic scenario name. The file ScenarioStructure.dat gives the names of all the scenarios, and for each scenario there is a data file with the same name and the suffix ``.dat'' that contains the full specification of data for the scenario.
The file ScenarioStructure.dat contains the following data:

\item set Scenarios: List of the names of the scenarios. These names will subsequently be used as indexes in this data file, and they will
also be used as the root file names for the scenario data files (each of these will have a .dat extension) if the
parameter ScenarioBasedData is set to True, which is the default.

\item set Stages: List of the names of the time stages, which must be given in time order. In the sequel we will use {\sc StageName} to represent a stage name used
as an index.

\item set Nodes: List of the names of the nodes in the scenario tree. In the sequel we will use {\sc NodeName} to represent a node name used
as an index.

\item param NodeStage: A list of pairs of nodes and stages to indicate the stage for each node.

\item param Parent: A list of node pairs to indicate the parent of each node that has a parent (the root node will not be listed).

\item set Children[{\sc NodeName}]: For each node that has children, provide the list of children. No sets will be given for leaf nodes.

\item param ConditionalProbability: For each node in the scenario tree, give the conditional probability. For the root node it must be given as 1, and for
the children of any node with children, the conditional probabilities must sum to 1.

\item param ScenarioLeafNode: A list of scenario and node pairs to indicate the leaf node for each scenario.

\item set StageVariables[{\sc StageName}]: For each stage, list the pyomo model variables associated with that stage.

The data needed to instantiate these sets and parameters is provided by users in the file ScenarioStructure.dat, which can be given in AMPL \cite{ampl} format.
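As a concrete illustration, a hypothetical two-stage, two-scenario ScenarioStructure.dat might contain data along the following lines (the node, scenario, and variable names here are invented for this sketch, not taken from a distributed example):

\begin{verbatim}
set Stages := FirstStage SecondStage ;
set Nodes := RootNode GoodNode BadNode ;
param NodeStage := RootNode FirstStage
                   GoodNode SecondStage
                   BadNode SecondStage ;
param Parent := GoodNode RootNode
                BadNode RootNode ;
set Children[RootNode] := GoodNode BadNode ;
param ConditionalProbability := RootNode 1.0
                                GoodNode 0.5
                                BadNode 0.5 ;
set Scenarios := GoodScenario BadScenario ;
param ScenarioLeafNode := GoodScenario GoodNode
                          BadScenario BadNode ;
set StageVariables[FirstStage] := x[*] ;
set StageVariables[SecondStage] := y[*] ;
\end{verbatim}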
The default behavior is one file per scenario, where each file has the full data for the scenario. An
alternative is to specify just the data that changes from the root node, in one file per tree node.
To select this option, add the following line to ScenarioStructure.dat:

\verb|param ScenarioBasedData := False ;|

PH will then expect one data file per tree node, along the lines of the files in \verb|examples/pysp/farmer/NODEDATA|.

Advanced users may be interested in seeing the file \verb|coopr/pysp/utils/|, which defines the Python sets and parameters needed to describe stochastic elements. This file should not be edited.
\section{Command Line Arguments \label{cmdargsec}}

The basic PH algorithm is controlled by parameters that are set as command line arguments. Note that multi-character options begin with a double dash.
  \item \verb|-h|, \verb|--help|\\            Show help message and exit.
  \item \verb|--verbose|\\             Generate verbose output for both initialization and execution. Default is False.
  \item \verb|--report-solutions|\\     Always report PH solutions after each iteration. Enabled if --verbose is enabled. Default is False.
  \item \verb|--report-weights|\\    Always report PH weights prior to each iteration. Enabled if --verbose is enabled. Default is False.
  \item \verb|--model-directory|=MODEL\_DIRECTORY\\
                        The directory in which all model (reference and scenario) definitions are stored, i.e., the ``.py'' files. Default is ``.''.
  \item \verb|--instance-directory|=INSTANCE\_DIRECTORY\\
                        The directory in which all instance (reference and scenario) definitions are stored, i.e., the ``.dat'' files. Default is ``.''.
  \item \verb|--solver|=SOLVER\_TYPE\\  The type of solver used to solve scenario sub-problems. Default is cplex.
  \item \verb|--solver-manager|=SOLVER\_MANAGER\_TYPE\\
                        The type of solver manager used to coordinate scenario sub-problem solves. Default is serial. This option is changed in parallel applications,
as described in Section~\ref{parallelsec}.
  \item \verb|--max-iterations|=MAX\_ITERATIONS\\
                        The maximal number of PH iterations. Default is 100.
  \item \verb|--default-rho|=DEFAULT\_RHO\\
                        The default (global) rho for all blended variables. Default is 1.
  \item \verb|--rho-cfgfile|=RHO\_CFGFILE\\
                        The name of a configuration script to compute PH rho values. Default is None.
  \item \verb|--enable-termdiff-convergence|\\
                        Terminate PH based on the termdiff convergence metric. The convergence metric is the unscaled sum of differences between variable values and the mean. Default is True.
  \item \verb|--enable-normalized-termdiff-convergence|\\
                        Terminate PH based on the normalized termdiff
                        convergence metric. Each term in the termdiff sum is normalized by the average value (NOTE: it is NOT normalized by the number of scenarios). Default is False.
  \item \verb|--termdiff-threshold|=TERMDIFF\_THRESHOLD\\
                        The convergence threshold used in the term-diff and
                        normalized term-diff convergence criteria. Default is
                        0.01, which is too low for most problems.
  \item \verb|--enable-free-discrete-count-convergence|\\
                        Terminate PH based on the free discrete variable count convergence metric. Default is False.
  \item \verb|--free-discrete-count-threshold|=FREE\_DISCRETE\_COUNT\_THRESHOLD\\
                        The convergence threshold used in the free discrete variable count convergence criterion. Default is 20.
  \item \verb|--enable-ww-extensions|\\
                        Enable the Watson-Woodruff PH extensions plugin. Default is False.
  \item \verb|--ww-extension-cfgfile|=WW\_EXTENSION\_CFGFILE\\
                        The name of a configuration file for the Watson-Woodruff PH extensions plugin. Default is wwph.cfg.
  \item \verb|--ww-extension-suffixfile|=WW\_EXTENSION\_SUFFIXFILE\\
                        The name of a variable suffix file for the Watson-Woodruff PH extensions plugin. Default is wwph.suffixes.
   \item \verb|--user-defined-extension|=EXTENSIONFILE\\ Here, ``EXTENSIONFILE'' is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is the ``testphextension'' plugin, which simply prints a message to the screen for each callback. The file can be found in the sources directory and is shown in Section~\ref{ExtensionDetailsSec}. A test of this would be to specify \verb|--user-defined-extension=testphextension|, assuming it is in your PYTHONPATH or current directory.
Note that both PH extensions (WW PH and your own) can co-exist; however, the WW plugin will be invoked first.
\item \verb|--scenario-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--scenario-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for all scenario sub-problem solves to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the sub-problem solver.
 Whatever options are specified persist across all solves.
\item \verb|--ef-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--ef-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for the extensive form solve to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the EF problem solver.
  \item \verb|--write-ef|\\            Upon termination, write the extensive form of the model, accounting for all fixed variables.
  \item \verb|--solve-ef|\\            Following the write of the extensive form model, solve it.
  \item \verb|--ef-output-file|=EF\_OUTPUT\_FILE\\
                        The name of the extensive form output file (currently only LP format is supported), if writing of the extensive form is enabled. Default is efout.lp.
  \item \verb|--suppress-continuous-variable-output|\\
                        Eliminate PH-related output involving continuous variables. Default: output is not suppressed.
  \item \verb|--keep-solver-files|\\   Retain temporary input and output files for scenario sub-problem solves. Default: files are not kept.
  \item \verb|--output-solver-logs|\\  Output solver logs during scenario sub-problem solves. Default: no output.
  \item \verb|--output-ef-solver-log|\\
                        Output the solver log during the extensive form solve. Default: no output.
  \item \verb|--output-solver-results|\\
                        Output solutions obtained after each scenario sub-problem solve. Default: no output.
  \item \verb|--output-times|\\        Output timing statistics for various PH components. Default: no output.
  \item \verb|--disable-warmstarts|\\
                  Disable warm-start of scenario sub-problem solves in PH iterations >= 1.
                  Default=False (i.e., warm starts are the default).
  \item \verb|--drop-proximal-terms|\\
                  Eliminate proximal terms (i.e., the quadratic penalty terms) from the weighted PH objective.
                  Default=False (i.e., by default, the proximal terms are included).
  \item \verb|--retain-quadratic-binary-terms|\\
                  Do not linearize PH objective terms involving binary decision variables.
                  Default=False (i.e., the proximal term for binary variables is linearized by default; this can have some impact on the relaxations during the branch-and-bound solution process).
  \item \verb|--linearize-nonbinary-penalty-terms|=BPTS\\
                  Approximate the PH quadratic term for non-binary variables with a piece-wise linear function. The argument
                  BPTS gives the number of breakpoints in the linear approximation. The default is 0. Reasonable non-zero
                  values are usually in the range of 3 to 7. Note that if a breakpoint would be very close to a variable
                  bound, then the breakpoint is omitted. IMPORTANT: this option requires that all variables have bounds that
                  are either established in the reference model or by code specified using the bounds-cfgfile command line option. See Section~\ref{LinearSec}
                  for more information about linearizing the proximal term.
  \item \verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY\\
                        Specify the strategy used to distribute breakpoints on the
                        [lb, ub] interval of each variable when linearizing. 0
                        indicates a uniform distribution. 1 indicates
                        breakpoints at the node min and max, uniformly in-between. 2 indicates a more aggressive concentration of
                        breakpoints near the observed node min/max.
  \item \verb|--bounds-cfgfile|=BOUNDS\_CFGFILE\\
                  The argument BOUNDS\_CFGFILE specifies the name of an executable pyomo file that sets bounds. The default is that there is no file.
                  When specified, the code in this file is executed after the initialization of scenario data, so the
                  bounds can be based on data from all scenarios. The config subdirectory of the farmer example contains a simple example of
                  such a file (boundsetter.cfg).
  \item \verb|--checkpoint-interval|\\
                  The number of iterations between writes of a checkpoint file. Default is 0, indicating never.
  \item \verb|--restore-from-checkpoint|\\
                  The name of the checkpoint file from which PH should be initialized. The default is not to restore from a checkpoint.
  \item \verb|--profile|=PROFILE\\     Enable profiling of Python code. The value of this option is the number of functions that are summarized. The default is no profiling.
  \item \verb|--enable-gc|\\           Enable the Python garbage collector. The default is no garbage collection.
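The uniform breakpoint strategy (\verb|--breakpoint-strategy=0| combined with \verb|--linearize-nonbinary-penalty-terms|) can be sketched as follows. This is an illustrative helper, not the pysp implementation; in particular, the tolerance used to drop near-bound breakpoints is an assumption:

```python
def uniform_breakpoints(lb, ub, bpts, tol=1e-6):
    """Place bpts breakpoints uniformly on the interior of [lb, ub],
    omitting any that would fall within tol of a bound (the documented
    behavior when a breakpoint is very close to a variable bound).

    Illustrative sketch only; the actual pysp code may differ."""
    if bpts <= 0:
        return []
    step = (ub - lb) / (bpts + 1)
    points = [lb + i * step for i in range(1, bpts + 1)]
    # drop breakpoints that sit (numerically) on a variable bound
    return [p for p in points if p - lb > tol and ub - p > tol]
```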
\section{Extensions via Callbacks \label{CallbackSec}}

Basic PH can converge slowly, so it is usually advisable to extend or modify it. In pysp, this is done via the pyomo plug-in
mechanism. The basic PH implementation provides callbacks that enable access to the data structures used by the algorithm.
In \S\ref{WWExtensionSec} we describe extensions that are provided with the release. In \S\ref{ExtensionDetailsSec},
we provide information for power users who may wish to modify or replace the extensions.
\subsection{Watson and Woodruff Extensions \label{WWExtensionSec}}

Watson and Woodruff describe innovations for accelerating PH \cite{phinnovate}, most of which are generalized and
implemented in a plugin file distributed with pysp; users generally do not need to know this file name. To invoke the program
with these additional features, invoke the software with a command of the form:

runph --enable-ww-extensions
Many of the examples described in \S\ref{ExampleSec} use this plug-in. The main concept
is that some integer variables should be fixed as the algorithm progresses, for two reasons:

\item Convergence detection:
A detailed analysis of PH algorithm behavior on a variety of problems indicates that individual decision variables
frequently converge to specific,
fixed values across scenarios in early PH iterations. Further, despite interactions among
the variables, the value frequently does not change in subsequent PH
iterations. Such variable ``fixing'' behaviors lead to a potentially powerful, albeit obvious, heuristic: once
a particular variable has been the same in all scenarios for some number of iterations, fix it to that value.
For problems where the constraints effectively limit $x$
from both sides, these methods may result in PH encountering infeasible scenario sub-problems even though the
problem is ultimately feasible.
\item Cycle detection: When there are integer variables, cycling is sometimes
encountered; consequently, cycle detection and avoidance mechanisms are required to force
eventual convergence of the PH algorithm in the mixed-integer case. To detect cycles, we focus on
repeated occurrences of the weights, implemented using a simple hashing scheme \cite{tabuhash} to
minimize the impact on run-time. Once a cycle in the weight vectors associated with any decision variable is detected,
the value of that variable is fixed.
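The hashing idea can be sketched as follows. This is an illustrative sketch, not the pysp implementation; the rounding precision and default history length are assumptions:

```python
from collections import deque

class WeightCycleDetector:
    """Remember a bounded history of hashed weight vectors per variable;
    a repeated hash suggests the weights for that variable are cycling,
    making it a candidate for fixing ("slamming").

    Illustrative sketch only; the pysp code may differ."""

    def __init__(self, history_len=100):
        self.history = {}          # variable name -> deque of past hashes
        self.history_len = history_len

    def update(self, varname, weights):
        # hash the (rounded) weight vector for this variable
        h = hash(tuple(round(w, 6) for w in weights))
        seen = self.history.setdefault(
            varname, deque(maxlen=self.history_len))
        repeated = h in seen
        seen.append(h)
        return repeated            # True suggests a cycle
```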
Fixing variables aggressively can result in shorter solution times, but can also result in solutions that are not as good. Furthermore, for some problems, aggressive fixing can result in infeasible sub-problems even though the problem is
ultimately feasible. Many of the parameters discussed in the next subsections control the fixing of variables. This is
discussed in a tutorial in Section~\ref{WWTutorialSec}.

\subsubsection{Variable Specific Parameters}

The plug-in makes use of parameters to control behavior at the variable level. Global defaults (to override the defaults stated here) should be set using the methods described in \S\ref{ParmSec}. Values for each variable should be set using the methods described in \S\ref{SuffixSec}.
Note that for variable fixing based on convergence detection, iteration zero is treated separately. The parameters
are as follows:
\item fix\_continuous\_variables: True or False. If True, fixing applies to all variables. If False, fixing applies only to discrete variables.
\item Iter0FixIfConvergedAtLB: 1 (True) or 0 (False). If 1, then discrete variables that are at their lower bound in all scenarios after
the iteration zero solves will be fixed at that bound.
\item Iter0FixIfConvergedAtUB: 1 (True) or 0 (False). If 1, then discrete variables that are at their upper bound in all scenarios after
the iteration zero solves will be fixed at that bound.
\item Iter0FixIfConvergedAtNB: 1 (True) or 0 (False). If 1, then discrete variables that are at the same value in all scenarios after
the iteration zero solves will be fixed at that value, without regard to whether it is a bound. If this is true, it takes precedence. A value of zero, on the other hand, implies that variables will not be fixed at a non-bound.
\item FixWhenItersConvergedAtLB: The number of consecutive PH iterations that discrete variables must be at their lower bound in all scenarios before they
will be fixed at that bound. A value of zero implies that variables will not be fixed at the bound.
\item FixWhenItersConvergedAtUB: The number of consecutive PH iterations that discrete variables must be at their upper bound in all scenarios before they
will be fixed at that bound. A value of zero implies that variables will not be fixed at the bound.
\item FixWhenItersConvergedAtNB: The number of consecutive PH iterations that discrete variables must be at the same, consistent value in all scenarios before they
will be fixed at that value, without regard to whether it is a bound. If this is nonzero, it takes precedence. A value of zero, on the other hand, implies that variables will not be fixed at a non-bound.
\item FixWhenItersConvergedContinuous: The number of consecutive PH iterations that continuous variables must be at the same, consistent value in all scenarios before they
will be fixed at that value. A value of zero implies that continuous variables will not be fixed.
\item CanSlamToLB: True or False. If True, then slamming can be to the lower bound for any variable.
\item CanSlamToMin: True or False. If True, then slamming can be to the minimum across scenarios for any variable.
\item CanSlamToAnywhere: True or False. If True, then slamming can be to any value.
\item CanSlamToMax: True or False. If True, then slamming can be to the maximum across scenarios for any variable.
\item CanSlamToUB: True or False. If True, then slamming can be to the upper bound for any variable.
\item DisableCycleDetection: True or False. If True, then cycle detection and the associated slamming are completely disabled. This cannot be changed to False on the fly, because a value of True at startup causes creation of the cycle detection storage to be bypassed.
In the event that multiple slam targets are True, Min and Max trump LB and UB, while Anywhere trumps all.
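The precedence rule can be sketched as follows (a hypothetical helper; how a tie between Min and Max is broken, and the value chosen for ``Anywhere,'' are assumptions made purely for illustration):

```python
def choose_slam_value(values, lb, ub, flags):
    """Pick the value a variable would be slammed (fixed) to, honoring the
    documented precedence: Anywhere trumps all, Min/Max trump LB/UB.

    values -- the variable's value in each scenario
    flags  -- dict of the CanSlamTo* parameters
    Returns None if no slamming is permitted.  Illustrative sketch only."""
    if flags.get("CanSlamToAnywhere"):
        # "anywhere" is resolved to the scenario mean, purely for illustration
        return sum(values) / len(values)
    if flags.get("CanSlamToMin"):
        return min(values)
    if flags.get("CanSlamToMax"):
        return max(values)
    if flags.get("CanSlamToLB"):
        return lb
    if flags.get("CanSlamToUB"):
        return ub
    return None
```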
\subsubsection{General Parameters}

The plug-in also makes use of the following parameters, which should be set using the methods described in \S\ref{ParmSec}.
\item Iteration0MipGap: Gives the mipgap to be sent to the solver for the iteration zero solves. The default is zero, which causes nothing to be sent to the solver (i.e., the solver uses its default mipgap).
\item InitialMipGap: Gives the mipgap to be sent to the solver for the iteration one solves. The default is zero, which causes nothing to be sent to the solver (i.e., the solver uses its default mipgap). If not zero, then this gap will be used as the starting point for the mipgap, which changes on each PH iteration in proportion to the convergence termination criterion, so that when the criterion for termination is met
the mipgap will be at (or near) the value of the FinalMipGap parameter. If the InitialMipGap is significantly higher than the Iteration0MipGap parameter, the PH algorithm may
perform poorly. This is because the iteration k-1 solutions are used to warm start the iteration k solves. If the iteration one mipgap is much higher than the iteration zero mipgap, the iteration zero
solution, although not optimal for the iteration one objective, might be within the mipgap. Default: 0.10.
\item FinalMipGap: The target for the mipgap when PH has converged. Default: 0.001.
\item PH\_Iters\_Between\_Cycle\_Slams: Controls the number of iterations to wait after a variable is slammed due to
hash hits that suggest convergence. Zero indicates unlimited slams per cycle. Default: 1.
\item SlamAfterIter: Iteration number after which one variable every other iteration will be slammed to force convergence. Default: the number of scenarios.
\item hash\_hit\_len\_to\_slam: Ignore possible cycles for which the only evidence of a cycle is shorter than this. Also, ignore cycles if any variables have been fixed in the previous hash\_hit\_len\_to\_slam PH iterations. Default:
the number of scenarios. This default is often not a good choice. For many problems with a lot of scenarios,
a value like 10 or 20 might be much better if rapid convergence is desired.
\item W\_hash\_history\_len: This obscure parameter controls how far back the code will look to see if there is a possible cycle. Default: max(100, number of scenarios).
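The documented interpolation between InitialMipGap and FinalMipGap can be sketched as follows. This is an illustrative linear schedule under the stated proportionality, not the exact formula used by the plug-in:

```python
def iteration_mipgap(metric, initial_metric,
                     initial_gap=0.10, final_gap=0.001):
    """Shrink the mipgap from initial_gap toward final_gap as the PH
    convergence metric falls from its initial (iteration-one) value
    toward zero.  Illustrative sketch, not the pysp formula."""
    if initial_metric <= 0.0:
        return final_gap
    # fraction of the way back toward the starting metric, clamped to [0, 1]
    fraction = max(0.0, min(1.0, metric / initial_metric))
    return final_gap + fraction * (initial_gap - final_gap)
```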
\subsubsection{Setting Parameter Values \label{ParmSec}}

The parameters of PH and of any callbacks can be changed using the file wwph.cfg, which is executed by the Python interpreter after PH has initialized. This is
a potentially powerful and/or dangerous procedure, because it gives an opportunity
to change anything using the full features of Python and pyomo.

\subsubsection{Setting Suffix Values \label{SuffixSec}}

Suffixes are set using the data file named \verb|wwph.suffixes| using this syntax:

where VARSPEC is replaced by a variable specification, SUFFIX is replaced by a suffix name, and VALUE is replaced by the value of the suffix for
that variable or those variables. Here is an example:
   Delta CanSlamToLB False
   Gamma[*,Ano1] SlammingPriority 10
   Gamma[*,Ano2] SlammingPriority 20
   ...
\subsection{Callback Details \label{ExtensionDetailsSec}}

Most users of pysp can skip this subsection.
A callback class definition named iphextension can be used to implement callbacks at a
variety of points in PH. For example, the method post\_iteration\_0\_solves is called immediately after all iteration zero
solves, but before averages and weights have been computed, while the method post\_iteration\_0 is called after averages and
weights based on iteration zero have been computed. The file defining iphextension is in the coopr/pysp directory and is not
intended to be edited by users.

The user defines a class derived from SingletonPlugin that implements iphextension. Its name is given to PH as an option, and the class will be automatically instantiated by PH.
It has access to the data and methods of the PH class. An example of such a class is in the
pysp example directory.

A user-defined extension file can be incorporated by using the command line option
\verb|--user-defined-extension=EXTENSIONFILE|. Here, ``EXTENSIONFILE'' is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is the ``testphextension'' plugin, which simply prints a message to the screen for each callback. The file can be found in the sources directory. An easy test would be to specify \verb|--user-defined-extension=testphextension|; note
that the ``.py'' file extension is not included on the runph command line.
Here are the callbacks:

\item post\_ph\_initialization: Called after PH data structures have been initialized, but before the iteration zero solves.
\item post\_iteration\_0\_solves: Called after the iteration zero solutions and some statistics, such as averages, have been computed, but before weights are updated.
\item post\_iteration\_0: Called after all processing for iteration zero is complete.
\item post\_iteration\_k\_solves: Called after solutions and some statistics, such as averages, have been computed, but before weights are updated, for iterations after iteration zero.
\item post\_iteration\_k: Called after all processing for each iteration after iteration zero is complete.
\item post\_ph\_execution: Called after PH execution is complete.

Users interested in writing their own extensions will probably want to refer to the source file to get a deeper understanding of when the callbacks occur.
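As a minimal sketch of the callback interface, consider the following class. It deliberately does not derive from SingletonPlugin or import coopr, so it is illustrative only; the hook names are those listed above, and each hook simply records that it fired:

```python
class LoggingPHExtension:
    """Record which PH callback hooks fired, in order.  A real extension
    would derive from SingletonPlugin, implement iphextension, and use
    the `ph` argument to inspect or modify PH data structures."""

    def __init__(self):
        self.calls = []

    def post_ph_initialization(self, ph):
        self.calls.append("post_ph_initialization")

    def post_iteration_0_solves(self, ph):
        self.calls.append("post_iteration_0_solves")

    def post_iteration_0(self, ph):
        self.calls.append("post_iteration_0")

    def post_iteration_k_solves(self, ph):
        self.calls.append("post_iteration_k_solves")

    def post_iteration_k(self, ph):
        self.calls.append("post_iteration_k")

    def post_ph_execution(self, ph):
        self.calls.append("post_ph_execution")
```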
\section{Examples \label{ExampleSec}}

A number of examples are provided with pysp.

\subsection{Farmer Example}

This two-stage example is composed of models and data for the ``Farmer'' stochastic program, introduced in Section 1.1 of
``Introduction to Stochastic Programming'' by Birge and
Louveaux \cite{spbook}.

\item ReferenceModel.py: a single-scenario model for the SP
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do; it is used to flesh out variable and constraint index sets)
\item ScenarioStructure.dat: data file defining the scenario tree.
\item AboveAverageScenario.dat: one of the scenario data files.
\item BelowAverageScenario.dat: one of the scenario data files.
\item AverageScenario.dat: one of the scenario data files.

The command runph executes PH, assuming the ReferenceModel.* and ScenarioStructure.* files are present and correct.
This example is probably in a directory with a name something like:

The data is in a subdirectory called scenariodata and the model is in the models subdirectory. To
invoke PH for this problem, connect to this farmer directory and use the command:

runph --model-directory=models --instance-directory=scenariodata
\subsection{Sizes Example}

This two-stage example is composed of models and data for the ``Sizes'' stochastic program \cite{sizes,lokwood}, which consists of the following files:

\item wwph.cfg: replaces default algorithm parameter values for the Watson and Woodruff extensions.
\item wwph.suffixes: sets algorithm parameter values at the variable level for the Watson and Woodruff extensions.
\item ReferenceModel.py: a single-scenario model for the SP
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do; it is used to flesh out variable and constraint index sets)
\item ScenarioStructure.dat: data file defining the scenario tree.
\item Scenario1.dat: one of the scenario data files.
\item Scenario2.dat: one of the scenario data files.
\item ...

This example is probably in a directory with a name something like:

The data for a three-scenario version is in a subdirectory called SIZES3 and a ten-scenario
dataset is in SIZES10.

To invoke PH for the ten-scenario problem, connect to the sizes directory and use the command:

runph --model-directory=models --instance-directory=SIZES10
\subsection{Forestry Example}

This four-stage example is composed of models and data for the ``forestry'' stochastic program, which consists of the following files:

\item wwph.cfg: replaces default algorithm parameter values for the Watson and Woodruff extensions.
\item wwph.suffixes: sets algorithm parameter values at the variable level for the Watson and Woodruff extensions.
\item ReferenceModel.py: a single-scenario model for the SP
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do; it is used to flesh out variable and constraint index sets)
\item ScenarioStructure.dat: data file defining the scenario tree.
\item Scenario1.dat: one of the scenario data files.
\item Scenario2.dat: one of the scenario data files.
\item ...

This example is probably in a directory with a name something like:

There are two families of instances: ``Chile'' and ``Davis,'' each with four stages and eighteen scenarios. There is also a small
two-stage, four-scenario instance in the subdirectory DAVIS2STAGE.

The full 18-scenario problem instance takes too long without the Watson-Woodruff extensions, but to
invoke PH for this problem without them, connect to the forestry directory and use the command:

runph --model-directory=models --instance-directory=chile

and to run with the extensions, use

runph --model-directory=models --instance-directory=davis \
--enable-ww-extensions --ww-extension-cfgfile=config/wwph.cfg \
456\section{Tutorial: Parameters for Watson and Woodruff PH Extensions \label{WWTutorialSec}}
458The parameters for the PH extensions in provide the user with considerable control over
459how and under what conditions variables are fixed during the PH algorithm execution. Often, some variables converge
460to consistent values during early iterations and can be fixed at these values without affecting quality very much and without
461affecting feasibility at all. Also, the algorithm may need to fix some variables during execution in order to break cycles.
462In both cases, guidance from the user concerning which classes of variables can and/or should be fixed under various circumstances can be very helpful.
The overarching goal is to support industrial and government users who solve the same model repeatedly with different, but
somewhat similar, data each time. In such settings, it behooves the modeler to consider tradeoffs between speed, solution quality,
and feasibility, and to create at least one good set of directives and parameters for the PH algorithm. In some cases, a user may want more than
one set of directives and parameters, depending on whether speed of execution or quality of solution is more important. Iteration zero is controlled separately because the absence of the quadratic term often results in faster solves for this
iteration, and fixing variables after this iteration has the maximum possible impact on speedup.
In order to discuss these issues, we consider an example.

\subsection{Sizes Example}

The Sizes example is simple and small. In fact, the instances that we will consider are so small that there is really
no need to use the PH algorithm, since the extensive form can be solved directly in a few minutes. However, it serves as a
vehicle for introducing the concepts.
This description and formulation comes from the original paper by Jorjani, Scott and Woodruff \cite{sizes}.
If demand constraints for the current period are
based on firm orders but future demands are based on forecasts or
conjecture, then they should not be treated in the same fashion. We
must recognize that future demand constraints are stochastic; that is
to say, they should be modeled as random variables.
It may not be reasonable or useful to consider the entire demand probability
distribution functions. It may not be reasonable because there may not be
sufficient data to estimate an entire distribution. It may not be useful
because the essence of the stochastics may be captured by specifying a
small number of representative {\em scenarios}. We assume that scenarios
are specified by giving a full set of random variable realizations and a
corresponding probability.
We index the scenario set, ${\cal L}$, by $\ell$ and
refer to the probability of occurrence of $\ell$ (or, more accurately, a
realization ``near'' scenario $\ell$) as $\Prl$.
We refer to solution systems that satisfy constraints with probability one as
{\em admissible}.
In addition to modeling stochastics, we would like to model {\em
recourse} as well. That is, we would like to model the ability of
decision makers to make use of new information (e.g., orders) at the
start of each planning period.  We allow our solution vectors to
depend on the scenario that is realized, but we do not want to assume
prescience. We refer to a system of solution vectors as {\em
implementable} if, for all decision times $t$, the solution vector
elements corresponding to periods $1,\ldots,t$ are constant with
respect to information that becomes available only after stage $t$.
We refer to the set of implementable solutions as ${\cal N}_{\cal L}$.  It is
possible to require implementable solutions by adding {\em
non-anticipativity constraints}, but we will instead make use of
solution procedures that implicitly guarantee implementable solutions.
In the stochastic, multi-period formulation that follows, the objective
is to minimize expected costs.  We invoke the network equivalence
given earlier to drop the explicit requirement that $\Vector{x}$ and
$\Vector{y}$ be integers. Variables and data are subscripted with a
period index $t$ that takes on values up to $T$. To model the idea
that sleeves produced in one period can be used as-is or cut in
subsequent periods, we use $x_{ijt}$ to indicate that sleeves of
length index $i$ are to be used without cutting in period $t$ if $i=j$,
and with cutting otherwise.  The $\Vector{y}$ vector gives production
quantities for each length in each period without regard to the period
in which they will be used (and perhaps cut). The formulation is
essentially an extension of ILP, except that a capacity constraint
must be added in the multiple period formulation. Holding costs could
be added, but an additional subscript becomes necessary without the
benefit of any additional insight. As an aside, note that the addition
of holding costs would add a large number of continuous variables but
no new integers, so the impact on computational performance would not
be catastrophic.
\min\sum_{\ell\in{\cal L}}\Prl
     \sum_{t=1}^{T}\left[\sum_{i=1}^{N}\left(sz_{it\ell}+p_{i}y_{it\ell}\right)
    +r\sum_{j<i}x_{ijt\ell}\right] \;\;\;\;\mbox{(SMIP)}

subject to

\sum_{j \geq i}x_{ijt\ell} & \geq & d_{it\ell}
          \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{Dconstr} \\
\sum_{t'\leq t}\left[\sum_{j \leq i}x_{ijt'\ell} - y_{it'\ell}\right] & \leq & 0
          \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{Pconstr} \\
y_{it\ell} - Mz_{it\ell} & \leq & 0
          \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{Mconstr} \\
\sum_{i=1}^{N}y_{it\ell} & \leq & c_{t\ell}
          \;\;\;\; \ell \in {\cal L}, \; t=1,\ldots,T \label{Cconstr} \\
z_{it\ell} & \in & \{0,1\}
          \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{intconstr} \\
\Vector{x}, \Vector{y}, \Vector{z} & \in & {\cal N}_{\cal L}
Bear in mind that solution vector elements corresponding
to periods two through $T$ are not actually intended for use; they
are computed just to see the effect that the period 1 (the current period)
decisions would have on future optimal behavior. At the start of period 2
-- or at any other time -- the decision maker would run the model again
with updated demands for the current period and new scenario estimates.
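To make the final constraint of (SMIP) concrete: for a two-stage instance, the implementability requirement $\Vector{x}, \Vector{y}, \Vector{z} \in {\cal N}_{\cal L}$ could equivalently be enforced by explicit non-anticipativity constraints equating all first-period decisions across scenarios:

```latex
% Explicit non-anticipativity for the first period (two-stage case):
x_{ij1\ell} = x_{ij1\ell'}, \;\;
y_{i1\ell}  = y_{i1\ell'},  \;\;
z_{i1\ell}  = z_{i1\ell'}
\;\;\;\; \mbox{for all } \ell, \ell' \in {\cal L} \mbox{ and all applicable } i, j
```

As noted above, PH does not add these constraints explicitly; it drives the scenario subproblem solutions toward agreement so that the constraints are satisfied implicitly.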
Here is the single scenario reference model:
# This is the two-period version of the SIZES optimization model.

from coopr.pyomo import *

#
# Model
#

model = Model()

#
# Parameters
#

# the number of product sizes.
model.NumSizes = Param(within=NonNegativeIntegers)

def product_sizes_rule(model):
    ans = set()
    for i in range(1, model.NumSizes()+1):
        ans.add(i)
    return ans

# the set of sizes, labeled 1 through NumSizes.
model.ProductSizes = Set(initialize=product_sizes_rule)

# the deterministic demands for product at each size.
model.DemandsFirstStage = Param(model.ProductSizes, within=NonNegativeIntegers)
model.DemandsSecondStage = Param(model.ProductSizes, within=NonNegativeIntegers)

# the unit production cost at each size.
model.UnitProductionCosts = Param(model.ProductSizes, within=NonNegativeReals)

# the setup cost for producing any units of size i.
model.SetupCosts = Param(model.ProductSizes, within=NonNegativeReals)

# the unit penalty cost of meeting demand for size j with larger size i.
model.UnitPenaltyCosts = Param(model.ProductSizes, within=NonNegativeReals)

# the cost to reduce a unit i to a lower unit j.
model.UnitReductionCost = Param(within=NonNegativeReals)

# a cap on the overall production within any time stage.
model.Capacity = Param(within=PositiveReals)

# a derived set to constrain the NumUnitsCut variable domain.
def num_units_cut_domain_rule(model):
   ans = set()
   for i in range(1, model.NumSizes()+1):
      for j in range(1, i+1):
         ans.add((i,j))
   return ans

model.NumUnitsCutDomain = Set(initialize=num_units_cut_domain_rule, dimen=2)

#
# Variables
#

# are any products at size i produced?
model.ProduceSizeFirstStage = Var(model.ProductSizes, domain=Boolean)
model.ProduceSizeSecondStage = Var(model.ProductSizes, domain=Boolean)

# NOTE: The following (num-produced and num-cut) variables are implicitly integer
#       under the normal cost objective, but with the PH cost objective, this isn't
#       the case.

# the number of units at each size produced.
model.NumProducedFirstStage = Var(model.ProductSizes, domain=NonNegativeIntegers)
model.NumProducedSecondStage = Var(model.ProductSizes, domain=NonNegativeIntegers)

# the number of units of size i cut (down) to meet demand for units of size j.
model.NumUnitsCutFirstStage = Var(model.NumUnitsCutDomain, domain=NonNegativeIntegers)
model.NumUnitsCutSecondStage = Var(model.NumUnitsCutDomain, domain=NonNegativeIntegers)

# stage-specific cost variables for use in the pysp scenario tree / analysis.
model.FirstStageCost = Var(domain=NonNegativeReals)
model.SecondStageCost = Var(domain=NonNegativeReals)

#
# Constraints
#

# ensure that demand is satisfied in each time stage, accounting for cut-downs.
def demand_satisfied_first_stage_rule(i, model):
   return (0.0, \
           sum(model.NumUnitsCutFirstStage[j,i] \
               for j in model.ProductSizes if j >= i) - model.DemandsFirstStage[i], \
           None)

def demand_satisfied_second_stage_rule(i, model):
   return (0.0, \
           sum(model.NumUnitsCutSecondStage[j,i] \
               for j in model.ProductSizes if j >= i) - model.DemandsSecondStage[i], \
           None)

model.DemandSatisfiedFirstStage = \
                Constraint(model.ProductSizes, rule=demand_satisfied_first_stage_rule)
model.DemandSatisfiedSecondStage = \
                Constraint(model.ProductSizes, rule=demand_satisfied_second_stage_rule)

# ensure that you don't produce any units if the decision has been made to disable production.
def enforce_production_first_stage_rule(i, model):
   # The production capacity per time stage serves as a simple upper bound for "M".
   return (None, \
           model.NumProducedFirstStage[i] - model.Capacity * model.ProduceSizeFirstStage[i], \
           0.0)

def enforce_production_second_stage_rule(i, model):
   # The production capacity per time stage serves as a simple upper bound for "M".
   return (None, \
           model.NumProducedSecondStage[i] - model.Capacity * model.ProduceSizeSecondStage[i], \
           0.0)

model.EnforceProductionBinaryFirstStage = \
                  Constraint(model.ProductSizes, rule=enforce_production_first_stage_rule)
model.EnforceProductionBinarySecondStage = \
                  Constraint(model.ProductSizes, rule=enforce_production_second_stage_rule)

# ensure that the production capacity is not exceeded for each time stage.
def enforce_capacity_first_stage_rule(model):
   return (None, \
           sum(model.NumProducedFirstStage[i] for i in model.ProductSizes) - model.Capacity, \
           0.0)

def enforce_capacity_second_stage_rule(model):
   return (None, \
           sum(model.NumProducedSecondStage[i] for i in model.ProductSizes) - model.Capacity, \
           0.0)

model.EnforceCapacityLimitFirstStage = Constraint(rule=enforce_capacity_first_stage_rule)
model.EnforceCapacityLimitSecondStage = Constraint(rule=enforce_capacity_second_stage_rule)

# ensure that you can't generate inventory out of thin air.
def enforce_inventory_first_stage_rule(i, model):
   return (None, \
           sum(model.NumUnitsCutFirstStage[i,j] \
               for j in model.ProductSizes if j <= i) - model.NumProducedFirstStage[i], \
           0.0)

def enforce_inventory_second_stage_rule(i, model):
   return (None, \
           sum(model.NumUnitsCutFirstStage[i,j] \
               for j in model.ProductSizes \
                  if j <= i) + sum(model.NumUnitsCutSecondStage[i,j] \
                     for j in model.ProductSizes if j <= i) \
               - model.NumProducedFirstStage[i] - model.NumProducedSecondStage[i], \
           0.0)

model.EnforceInventoryFirstStage = \
               Constraint(model.ProductSizes, rule=enforce_inventory_first_stage_rule)
model.EnforceInventorySecondStage = \
               Constraint(model.ProductSizes, rule=enforce_inventory_second_stage_rule)

# stage-specific cost computations.
def first_stage_cost_rule(model):
   production_costs = \
            sum(model.SetupCosts[i] * model.ProduceSizeFirstStage[i] \
                + model.UnitProductionCosts[i] * model.NumProducedFirstStage[i] \
                for i in model.ProductSizes)
   cut_costs = \
             sum(model.UnitReductionCost * model.NumUnitsCutFirstStage[i,j] \
                 for (i,j) in model.NumUnitsCutDomain if i != j)
   return (model.FirstStageCost - production_costs - cut_costs) == 0.0

model.ComputeFirstStageCost = Constraint(rule=first_stage_cost_rule)

def second_stage_cost_rule(model):
   production_costs = \
                sum(model.SetupCosts[i] * model.ProduceSizeSecondStage[i] \
                    + model.UnitProductionCosts[i] * model.NumProducedSecondStage[i] \
                    for i in model.ProductSizes)
   cut_costs = \
             sum(model.UnitReductionCost * model.NumUnitsCutSecondStage[i,j] \
                 for (i,j) in model.NumUnitsCutDomain if i != j)
   return (model.SecondStageCost - production_costs - cut_costs) == 0.0

model.ComputeSecondStageCost = Constraint(rule=second_stage_cost_rule)

#
# Objective
#

def total_cost_rule(model):
   return (model.FirstStageCost + model.SecondStageCost)

model.TotalCostObjective = Objective(rule=total_cost_rule, sense=minimize)
This file establishes the data for a representative instance. The main thing to notice is that the indexes for
sizes happen to be given as integers, which is not required: they could have been strings.
param NumSizes := 10 ;

param Capacity := 200000 ;

param DemandsFirstStage := 1 2500 2 7500 3 12500 4 10000 5 35000
                           6 25000 7 15000 8 12500 9 12500 10 5000 ;

param DemandsSecondStage := 1 2500 2 7500 3 12500 4 10000 5 35000
                            6 25000 7 15000 8 12500 9 12500 10 5000 ;

param UnitProductionCosts := 1 0.748 2 0.7584 3 0.7688 4 0.7792 5 0.7896
                             6 0.8 7 0.8104 8 0.8208 9 0.8312 10 0.8416 ;

param SetupCosts := 1 453 2 453 3 453 4 453 5 453 6 453 7 453 8 453 9 453 10 453 ;

param UnitReductionCost := 0.008 ;
Here is the data file that describes the stochastics:
set Stages := FirstStage SecondStage ;

set Nodes := RootNode
             Scenario1Node
             Scenario2Node
             Scenario3Node
             Scenario4Node
             Scenario5Node
             Scenario6Node
             Scenario7Node
             Scenario8Node
             Scenario9Node
             Scenario10Node ;

param NodeStage := RootNode             FirstStage
                   Scenario1Node        SecondStage
                   Scenario2Node        SecondStage
                   Scenario3Node        SecondStage
                   Scenario4Node        SecondStage
                   Scenario5Node        SecondStage
                   Scenario6Node        SecondStage
                   Scenario7Node        SecondStage
                   Scenario8Node        SecondStage
                   Scenario9Node        SecondStage
                   Scenario10Node       SecondStage ;

set Children[RootNode] := Scenario1Node
                          Scenario2Node
                          Scenario3Node
                          Scenario4Node
                          Scenario5Node
                          Scenario6Node
                          Scenario7Node
                          Scenario8Node
                          Scenario9Node
                          Scenario10Node ;

param ConditionalProbability := RootNode          1.0
                                Scenario1Node     0.10
                                Scenario2Node     0.10
                                Scenario3Node     0.10
                                Scenario4Node     0.10
                                Scenario5Node     0.10
                                Scenario6Node     0.10
                                Scenario7Node     0.10
                                Scenario8Node     0.10
                                Scenario9Node     0.10
                                Scenario10Node    0.10 ;

set Scenarios := Scenario1
                 Scenario2
                 Scenario3
                 Scenario4
                 Scenario5
                 Scenario6
                 Scenario7
                 Scenario8
                 Scenario9
                 Scenario10 ;

param ScenarioLeafNode := Scenario1  Scenario1Node
                          Scenario2  Scenario2Node
                          Scenario3  Scenario3Node
                          Scenario4  Scenario4Node
                          Scenario5  Scenario5Node
                          Scenario6  Scenario6Node
                          Scenario7  Scenario7Node
                          Scenario8  Scenario8Node
                          Scenario9  Scenario9Node
                          Scenario10 Scenario10Node ;

set StageVariables[FirstStage] := ProduceSizeFirstStage[*]
                                  NumProducedFirstStage[*]
                                  NumUnitsCutFirstStage[*,*] ;
set StageVariables[SecondStage] := ProduceSizeSecondStage[*]
                                   NumProducedSecondStage[*]
                                   NumUnitsCutSecondStage[*,*] ;

param StageCostVariable := FirstStage FirstStageCost
                           SecondStage SecondStageCost ;
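A quick consistency check on a scenario tree file like this one is that the conditional probabilities of each node's children sum to one. The sketch below is plain Python, not part of pysp, and hard-codes the data to mirror the file above rather than parsing it:

```python
# Sanity check for scenario tree data: the conditional probabilities of
# the children of each node should sum to 1. Illustrative only; the data
# mirrors the ScenarioStructure.dat shown above.

children = {"RootNode": ["Scenario%dNode" % i for i in range(1, 11)]}
cond_prob = {"RootNode": 1.0}
cond_prob.update(("Scenario%dNode" % i, 0.10) for i in range(1, 11))

for node, kids in children.items():
    total = sum(cond_prob[k] for k in kids)
    assert abs(total - 1.0) < 1e-9, "children of %s sum to %f" % (node, total)
```

The tolerance guards against floating-point rounding; ten probabilities of 0.10 do not sum to exactly 1.0 in binary arithmetic.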
This file provides default values. Even though most of them will be overridden at the variable level, it makes sense
to provide instance-specific defaults. For this problem, it does not seem helpful or prudent to do any fixing of continuous
variables, since they are only used to compute the objective function terms. The \verb|ProduceSizeFirstStage| variables are binary,
and fixing their values would seem to determine the values of the other variables. The other first-stage variables are general integers. The \verb|ProduceSizeFirstStage| variables can safely be fixed because, provided that the largest size is not fixed at zero, there is little risk of infeasibility, since larger sizes can be cut (at a cost) to meet demand for smaller sizes. Consequently, it is safe to let the algorithm slam those
variables as needed. Slamming the other variables is riskier because they could get slammed to values that don't make
sense given the values of the \verb|ProduceSizeFirstStage| variables.

Fixing variables that have converged will speed the algorithm, perhaps resulting in a less desirable
objective function value. It would make sense to tend to fix the \verb|ProduceSizeFirstStage| variables before fixing the others, to avoid
inconsistencies and because the \verb|ProduceSizeFirstStage| variables have a strong tendency to imply values for the other
variables.
Here is a sensible and internally consistent, if a bit conservative, wwph.cfg file:
# wwph.cfg
# python commands that are executed
# to set parameters and parameter defaults

self.fix_continuous_variables = False

self.Iteration0MipGap = 0.02
self.InitialMipGap = 0.10
self.FinalMipGap = 0.001

# for all of these, zero means don't do it.
self.Iter0FixIfConvergedAtLB = 0  # 1 or 0
self.Iter0FixIfConvergedAtUB = 0  # 1 or 0
self.Iter0FixIfConvergedAtNB = 0  # 1 or 0 (converged to a non-bound)
self.FixWhenItersConvergedAtLB = 0
self.FixWhenItersConvergedAtUB = 25
self.FixWhenItersConvergedAtNB = 0  # converged to a non-bound
self.FixWhenItersConvergedContinuous = 0

# "default" slamming parms
# True and False are the options (case sensitive)
self.CanSlamToLB = False
self.CanSlamToMin = False
self.CanSlamToAnywhere = False
self.CanSlamToMax = True
self.CanSlamToUB = False
self.PH_Iters_Between_Cycle_Slams = 5

# the next line will try to force at least one variable to be
# fixed every other iteration after iteration 50
# if anything can be slammed
self.SlamAfterIter = 50

self.hash_hit_len_to_slam = 50
self.W_hash_history_len = 100
There are a number of things to notice about the contents of this file. Since it is executed
as python code, the syntax matters. Users should change the values of numbers or change True to False, but
everything else should not be edited, with the exception of comments, which consist of any text after a sharp sign
(sometimes called a pound sign). Changes to this file should be tested incrementally, because errors are trapped by
the python interpreter and may be difficult for non-python programmers to decipher.
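The mechanism is easy to emulate: because the file's text is executed with \verb|self| bound to the extension object, each line simply assigns an attribute. Below is a minimal sketch; the class name and default values are illustrative, not the actual pysp extension class:

```python
# Minimal sketch of how a wwph.cfg-style options file can be applied:
# the file's text is executed with "self" bound to an extension object,
# so each assignment in the file sets an attribute on that object.
# Illustrative only; not the pysp implementation.

class ExtensionStub(object):
    def __init__(self):
        # defaults that the cfg file may override
        self.FixWhenItersConvergedAtUB = 0
        self.CanSlamToMax = False

cfg_text = """
self.FixWhenItersConvergedAtUB = 25
self.CanSlamToMax = True
"""

ext = ExtensionStub()
exec(cfg_text, {"self": ext})
assert ext.FixWhenItersConvergedAtUB == 25
assert ext.CanSlamToMax is True
```

This also explains why a syntax error in the cfg file surfaces as a raw python traceback: the file is executed, not parsed against a schema.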
This particular example file allows variables to be fixed if all scenarios have agreed on the
upper bound for 25 iterations in a row (\verb|FixWhenItersConvergedAtUB|). Since the \verb|ProduceSizeFirstStage| variables are the only discrete
variables in the first stage with an upper bound, they are the only variables affected by this.
This example allows slamming, but only to the max across scenarios. This is different from the upper bound, even for
binary variables, because a variable could be selected for slamming for which all scenarios have agreed on the same value
(which could be the lower bound). Data providing an override for this default, as well as control over the selection priority for variables to slam, are provided in the wwph.suffixes file.
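The distinction between slamming to the upper bound and slamming to the scenario maximum can be seen in a small sketch (illustrative only, not pysp code; the function and mode names are hypothetical):

```python
# Illustrates slam targets for a binary variable with declared bounds
# [0, 1]. Slamming to the max across scenarios can differ from slamming
# to the upper bound: if every scenario currently agrees on 0, the
# scenario max is 0 even though the upper bound is 1. Illustrative only.

def slam_target(scenario_values, lb, ub, mode):
    targets = {
        "LB": lb,
        "UB": ub,
        "Min": min(scenario_values),
        "Max": max(scenario_values),
    }
    return targets[mode]

values = [0, 0, 0]  # all scenarios agree on zero
assert slam_target(values, 0, 1, "UB") == 1   # declared upper bound
assert slam_target(values, 0, 1, "Max") == 0  # max across scenarios
```

Slamming to the scenario max therefore respects what the scenarios have already agreed on, which is why it is the safer of the two destinations in this example.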
# Optional suffixes to help control PH
#
# Text triples specifying (variable/variable-index, suffix-name, suffix-value) tuples.
#
# override the defaults given in wwph.cfg as needed

# If no scenario needs the smallest size to be produced, then just forget it
ProduceSizeFirstStage[1] Iter0FixIfConvergedAtLB 1

# if all scenarios need either of the two largest sizes, just lock them in
ProduceSizeFirstStage[9] Iter0FixIfConvergedAtUB 1
ProduceSizeFirstStage[10] Iter0FixIfConvergedAtUB 1

# and be aggressive with the smallest size after iteration 0
ProduceSizeFirstStage[1] FixWhenItersConvergedAtLB 8

# if the production quantities have been converged a long time, fix them
NumProducedFirstStage[*] FixWhenItersConvergedAtNB 20

# don't allow slamming of variables other than ProduceSize
# for completeness, disallow all slamming destinations

NumProducedFirstStage[*] CanSlamToLB False
NumProducedFirstStage[*] CanSlamToMin False
NumProducedFirstStage[*] CanSlamToAnywhere False
NumProducedFirstStage[*] CanSlamToMax False
NumProducedFirstStage[*] CanSlamToUB False

NumUnitsCutFirstStage[*,*] CanSlamToLB False
NumUnitsCutFirstStage[*,*] CanSlamToMin False
NumUnitsCutFirstStage[*,*] CanSlamToAnywhere False
NumUnitsCutFirstStage[*,*] CanSlamToMax False
NumUnitsCutFirstStage[*,*] CanSlamToUB False

# higher priority means slam these first
ProduceSizeFirstStage[1] SlammingPriority 1
ProduceSizeFirstStage[2] SlammingPriority 2
ProduceSizeFirstStage[3] SlammingPriority 3
ProduceSizeFirstStage[4] SlammingPriority 4
ProduceSizeFirstStage[5] SlammingPriority 5
ProduceSizeFirstStage[6] SlammingPriority 6
ProduceSizeFirstStage[7] SlammingPriority 7
ProduceSizeFirstStage[8] SlammingPriority 8
ProduceSizeFirstStage[9] SlammingPriority 9
ProduceSizeFirstStage[10] SlammingPriority 10
The first thing to notice is that this is a data file and not a file of Python commands.
Consequently, simpler messages are provided if there are errors, but the file can be verbose,
and a computer program or spreadsheet might be required to produce this data file. The next thing to notice
is that indexes can be specified either using valid index values or wildcards.
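The wildcard handling can be emulated with ordinary string matching. The sketch below shows how an entry like \verb|NumProducedFirstStage[*]| could be matched against concrete variable indices; it is illustrative only and is not the actual pysp parser:

```python
# Sketch of wildcard matching for suffix file entries such as
# "NumProducedFirstStage[*]" or "NumUnitsCutFirstStage[*,*]".
# Illustrative only; not the actual pysp parser.

def split_spec(s):
    """Split 'Var[i,j]' into ('Var', ['i', 'j']); 'Var' into ('Var', [])."""
    base, bracket, rest = s.partition("[")
    if not bracket:
        return base, []
    return base, rest.rstrip("]").split(",")

def spec_matches(spec, variable):
    spec_base, spec_idx = split_spec(spec)
    var_base, var_idx = split_spec(variable)
    if spec_base != var_base or len(spec_idx) != len(var_idx):
        return False
    # '*' matches any single index position
    return all(p == "*" or p == v for p, v in zip(spec_idx, var_idx))

assert spec_matches("NumProducedFirstStage[*]", "NumProducedFirstStage[3]")
assert spec_matches("NumUnitsCutFirstStage[*,*]", "NumUnitsCutFirstStage[5,2]")
assert not spec_matches("ProduceSizeFirstStage[1]", "ProduceSizeFirstStage[10]")
```

A literal index such as \verb|ProduceSizeFirstStage[1]| matches only that one variable, while a \verb|*| matches every value in that index position.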
This file overrides the defaults to allow some fixing after iteration zero. Fixing the smallest
size (index 1) to zero cannot result in infeasibility, because larger sizes can be cut to
satisfy demand for that size. Along the same lines, aggressively fixing the
production indicator for larger sizes to 1 is also
safe, and perhaps not sub-optimal if all scenarios ``want'' those sizes anyway.
# rhosetter.cfg

# users probably want this line intact so they can use model_instance
model_instance = self._model_instance

MyRhoFactor = 0.1

for i in model_instance.ProductSizes:
   self.setRhoAllScenarios(model_instance.ProduceSizeFirstStage[i], model_instance.SetupCosts[i] * MyRhoFactor)
   self.setRhoAllScenarios(model_instance.NumProducedFirstStage[i], model_instance.UnitProductionCosts[i] * MyRhoFactor)
   for j in model_instance.ProductSizes:
      if j <= i:
         self.setRhoAllScenarios(model_instance.NumUnitsCutFirstStage[i,j], model_instance.UnitReductionCost * MyRhoFactor)
To run this example, connect to the sizes directory, which is something like:

Then use
runph --model-directory=models --instance-directory=SIZES10 \
--enable-ww-extensions --ww-extension-cfgfile=config/wwph.cfg \
--ww-extension-suffixfile=config/wwph.suffixes \
Since this instance is so small by modern standards, the enhancements are not needed and may even increase the total solution
time. The example is provided to illustrate the features of the extensions, not to illustrate why you might need them. Much larger instances are required for that.
\section{Linearizing the Proximal Term \label{LinearSec}}
For a decision variable $x$, the proximal term added to the objective function for each scenario subproblem at PH iteration $k$ is

\left(x - \bar{x}^{(k-1)}\right)^{2}

where $\bar{x}^{(k-1)}$ is the average over the scenarios from the last iteration. Expanding the square reveals that the only
quadratic term is $x^{2}$. For binary variables, this is equal to $x$, although this is not the case when a relaxation is solved.
For binary variables, the default behaviour is to replace the quadratic term with $x$, but an option allows the quadratic to be
retained, because doing so can affect the subproblem solution time due to the use of the quadratic term when the relaxations are solved
as part of the branch and bound process.
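The binary simplification follows directly from expanding the square: using $x^{2} = x$ for $x \in \{0,1\}$, the proximal contribution becomes linear in $x$ (the constant term does not affect the minimizer and can be dropped):

```latex
\left(x - \bar{x}^{(k-1)}\right)^{2}
  = x^{2} - 2\bar{x}^{(k-1)}x + \left(\bar{x}^{(k-1)}\right)^{2}
  = \left(1 - 2\bar{x}^{(k-1)}\right)x + \left(\bar{x}^{(k-1)}\right)^{2}
```

Within branch and bound, however, the relaxed values of $x$ are fractional, where $x^{2} \neq x$; this is why retaining the true quadratic can change relaxation solutions and hence subproblem solution times.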
For non-binary variables, an option exists to replace the $x^2$ term with a piecewise linear approximation, and the number of breakpoints in the
approximation is under user control. This can have a significant effect on the CPU time required to solve sub-problems, because the presence of
the quadratic term increases solution times significantly. However, linearization results in a time-quality tradeoff: increasing
the number of breakpoints increases the fidelity of the approximation, but each breakpoint introduces another (unseen) binary variable, so solution times generally
increase as well.
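The quality side of the tradeoff can be seen numerically. The sketch below is plain Python, independent of pysp: it measures the worst-case gap of a uniform-breakpoint chord (piecewise linear) approximation of $x^2$, which shrinks quadratically as breakpoints are added.

```python
# Worst-case error of a piecewise-linear (chord) over-approximation of
# x**2 on an interval with uniform breakpoints. For x**2, the chord on a
# segment of width h overestimates the function by at most h**2/4, at
# the segment midpoint. Plain-Python illustration, independent of pysp.

def max_chord_error(lb, ub, num_segments):
    """Largest gap between the chords joining uniform breakpoints of
    x**2 on [lb, ub] and the function itself."""
    h = (ub - lb) / float(num_segments)
    return h * h / 4.0

coarse = max_chord_error(0.0, 10.0, 4)  # 5 breakpoints
fine = max_chord_error(0.0, 10.0, 8)    # 9 breakpoints
assert coarse == 1.5625
assert fine == coarse / 4.0  # doubling the segments quarters the error
```

Halving the gap therefore costs roughly 40% more breakpoints, each bringing another hidden binary variable, which is the time side of the tradeoff.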
A few strategies for placing the breakpoints are supported via the command line option
\verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY. A value of 0
indicates a uniform distribution of breakpoints. A value of 1 indicates
breakpoints at the min and max observed for the node in the scenario tree, placed uniformly
in between. A value of 2 indicates a more aggressive concentration of
breakpoints near the observed node min/max.
Upper and lower bounds for variables must be specified when the linearization option is chosen. This can be done by specifying bounds in the
reference model or by using the bounds-cfgfile command line option. It is, of course, best to use meaningful bounds provided
in the reference model; however, the modeller must be careful not to use estimated bounds that are too tight, since a bound that might not cut off
any optimal solutions for a particular scenario could still preclude an optimal (or even a feasible) solution to the
overall stochastic problem. The use of a bounds cfgfile is an advanced topic, but it enables the modeller to use bounds that cannot create
Using a bounds cfgfile is an advanced topic: the modeler is writing python/pyomo code that will be executed by the code at the core of the PH algorithm. The first executable line in a bounds file is typically:

model_instance = self._model_instance

This establishes a local variable called ``model\_instance'' that can be used to refer to the model (of course, a different name can be
used, such as MODINS). The object ``self'' on the right-hand side of this assignment refers to the core PH object. The model, in turn, contains the parameters and
variables defined in the model file, which can be accessed by name. For example, with the Farmer model, the cfg file
sets the upper bound on DevotedAcreage to the value of the parameter TOTAL\_ACREAGE at initialization (since this is Python, the
parentheses after TOTAL\_ACREAGE cause the value in TOTAL\_ACREAGE to be assigned rather than the name of the parameter object):
# only need to set upper bounds on first-stage variables, i.e., those being blended.

model_instance = self._model_instance

# the values supplied to the method
upper_bound = float(model_instance.TOTAL_ACREAGE())

for index in model_instance.CROPS:
   # the lower and upper bounds are expected to be floats, and trigger an exception if not.
   self.setVariableBoundsAllScenarios("DevotedAcreage", index, 0.0, upper_bound)
The same thing could be accomplished by setting the upper bound in the model file, but this does serve as a simple example of a bounds cfg file.
%\subsection{Sizes Example}
\section{Solving sub-problems in Parallel \label{parallelsec}}

Pyomo makes use of the pyro facility to enable sub-problems to be easily assigned for solution in parallel. This capability is supported by pysp. We will refer to a single master computer
and multiple slave computers in the discussion here, but in fact the master computing processes can (and for synchronous parallel, should) be on a
processor that also runs a slave process.
Here are the commands, in order:
\begin{enumerate}
\item On the master: \verb|coopr-ns|
\item On the master: \verb|dispatch_srvr|
\item On each slave: \verb|pyro_mip_server|
\item On the master: \verb|runph ... --solver-manager=pyro ...|
\end{enumerate}
Note that the command \verb|coopr-ns| and the argument \verb|solver-manager| have a dash in the middle, while
the commands \verb|dispatch_srvr| and \verb|pyro_mip_server| have underscores. The first three commands launch processes
that have no internal mechanism for termination; i.e., they will be terminated only if they crash or are
killed by an external process. It is common to launch these processes with output redirection, such as \verb|coopr-ns >& cooprns.log|. The
\verb|runph| command is a normal runph command with the usual arguments, with the additional specification that subproblem solves should
be directed to the pyro solver manager.