% pyspbody.tex
\def\Argmax{\mathop{\rm argmax}}
\newcommand{\xBar}{\Vector{\overline{x}}}
\newcommand{\Bar}[1]{\overline{#1}}
\newcommand{\Prl}{\mbox{Pr}(\ell)}

\newcommand{\Vector}[1]{\mbox{{\boldmath $#1$}}}
\newcommand{\Matrix}[1]{\mbox{{\boldmath $#1$}}}
\newcommand{\VectLen}[1]{\mid\Vector{#1}\mid}
\newcommand{\Rint}[1]{\int_{#1}^{\infty}}
%\newcommand{\Size}[1]{\| #1 \|}
\newcommand{\Size}[1]{\mid #1 \mid}
\newcommand{\Floor}[1]{\lfloor #1 \rfloor}
\newcommand{\Ceil}[1]{\left\lceil #1 \right\rceil}

\section{Overview}

The pysp package extends the pyomo modeling language to support multi-stage stochastic programs with
enumerated scenarios. Pyomo and pysp are written in Python (version 2.6). In order to
specify a program, the user must provide a reference model and a scenario tree.

Provided that the necessary paths have been communicated to the operating system, the command to execute the pysp package is
of the form:
\begin{verbatim}
runph
\end{verbatim}
It is possible,
and generally necessary, to provide command line arguments. The simplest argument causes the program to output help text:
\begin{verbatim}
runph --help
\end{verbatim}
but notice that there are two dashes before the word ``help.'' Command line arguments are summarized in Section~\ref{cmdargsec}.

The underlying algorithm in pysp
is based on Progressive Hedging (PH) \cite{RockafellarWets}, which decomposes the problem into sub-problems, one for
each scenario.
The algorithm progressively computes {\em weights} corresponding to each variable to force convergence
and also makes use of a {\em proximal} term that penalizes the squared deviation from the
mean solution of the last PH iteration.
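The per-variable weight update, $w_s \leftarrow w_s + \rho(x_s - \Bar{x})$, can be sketched as follows (a minimal illustration of the update just described, not pysp internals; the function name and signature are ours):

```python
# Sketch of the PH per-variable weight update (illustrative only, not pysp code).
# x: the value of one variable in each scenario sub-problem solution;
# p: the scenario probabilities; w: the current weights; rho: the penalty factor.
def ph_weight_update(w, x, p, rho):
    xbar = sum(pi * xi for pi, xi in zip(p, x))   # probability-weighted mean
    w_new = [wi + rho * (xi - xbar) for wi, xi in zip(w, x)]
    return w_new, xbar
```

The proximal term then adds a penalty proportional to $(x_s - \Bar{x})^2$ to each scenario's objective, using the mean from the previous iteration.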

\subsection{Reference Model}

The reference model describes the problem for a canonical scenario. It does not make use of, or describe, a scenario index or any information about uncertainty. Typically, it is just the model that would be used if there were only a single scenario. It is given as a pyomo file. Data from an arbitrary scenario is needed to instantiate it.

The objective function needs to be separated by stages. The term for each stage should be ``assigned'' (i.e., constrained to be equal to)
a variable. These variable names are reported in ScenarioStructure.dat so that they can be used for reporting purposes.
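As an illustration, a reference model might assign stage costs along these lines (schematic, pyomo-style pseudocode with hypothetical names and rules; consult the farmer example for a working model):

\begin{verbatim}
# Schematic pseudocode -- all names are hypothetical
model.FirstStageCost = Var()
model.SecondStageCost = Var()

# "assign" each stage's cost term to its variable
model.ComputeFirstStageCost = Constraint(
    rule=first_stage_cost_rule)   # FirstStageCost == <stage 1 expression>
model.ComputeSecondStageCost = Constraint(
    rule=second_stage_cost_rule)  # SecondStageCost == <stage 2 expression>

model.TotalCostObjective = Objective(
    rule=total_cost_rule)         # minimize FirstStageCost + SecondStageCost
\end{verbatim}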

\subsection{Scenario Tree}

The scenario tree provides information about the time stages and the nature of the uncertainties. In order to specify a tree, we must
indicate the time stages at which information becomes available. We also specify the nodes of the tree to indicate which variables
are associated with which realization at each stage. The data for each scenario is provided in a separate data file.

\section{File Structure}

\begin{itemize}
\item ReferenceModel.py (a pyomo model file)
\item ReferenceModel.dat (data for an arbitrary scenario)
\item ScenarioStructure.dat (among other things: the scenario names, Sname)
\item Sname.dat (full data), one file for each scenario
\end{itemize}

In this list we use ``Sname'' as the generic scenario name. The file ScenarioStructure.dat gives the names of all the scenarios, and for each scenario there is a data file with the same name and the suffix ``.dat'' that contains the full specification of data for the scenario.

\subsection{ScenarioStructure.dat}

The file ScenarioStructure.dat contains the following data:

\begin{itemize}
\item set Scenarios: List of the names of the scenarios. These names will subsequently be used as indexes in this data file and these names will
also be used as the root file names for the scenario data files (each of these will have a .dat extension) if the
parameter ScenarioBasedData is set to True, which is the default.
\item[]
\item set Stages: List of the names of the time stages, which must be given in time order. In the sequel we will use {\sc StageName} to represent a stage name used
as an index.
\item[]
\item set Nodes: List of the names of the nodes in the scenario tree. In the sequel we will use {\sc NodeName} to represent a node name used
as an index.
\item[]
\item param NodeStage: A list of pairs of nodes and stages to indicate the stage for each node.
\item[]
\item param Parent: A list of node pairs to indicate the parent of each node that has a parent (the root node will not be listed).
\item[]
\item set Children[{\sc NodeName}]: For each node that has children, provide the list of children. No sets will be given for leaf nodes.
\item[]
\item param ConditionalProbability: For each node in the scenario tree, give the conditional probability. For the root node it must be given as 1 and for
the children of any node with children, the conditional probabilities must sum to 1.
\item[]
\item param ScenarioLeafNode: A list of scenario and node pairs to indicate the leaf node for each scenario.
\item[]
\item set StageVariables[{\sc StageName}]: For each stage, list the pyomo model variables associated with that stage.
\end{itemize}

Data
to instantiate these sets and parameters is provided by users in the file ScenarioStructure.dat, which can be given in AMPL \cite{ampl} format.
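For concreteness, a two-stage tree with two scenarios might be described along these lines (a sketch only: the node, scenario, and variable names are hypothetical, and the farmer example should be consulted for a complete, correct file):

\begin{verbatim}
set Stages := FirstStage SecondStage ;
set Nodes := RootNode LowNode HighNode ;
param NodeStage := RootNode FirstStage
                   LowNode SecondStage
                   HighNode SecondStage ;
param Parent := LowNode RootNode
                HighNode RootNode ;
set Children[RootNode] := LowNode HighNode ;
param ConditionalProbability := RootNode 1.0
                                LowNode 0.5
                                HighNode 0.5 ;
set Scenarios := LowScenario HighScenario ;
param ScenarioLeafNode := LowScenario LowNode
                          HighScenario HighNode ;
set StageVariables[FirstStage] := x ;
set StageVariables[SecondStage] := y ;
\end{verbatim}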

The default behavior is one file per scenario, with each file containing the full data for the scenario. An
alternative is to specify just the data that changes from the root node, in one file per tree node.
To select this option, add the following line to ScenarioStructure.dat:

\verb|param ScenarioBasedData := False ;|

With this setting, pysp expects one data file per tree node, along the lines of the files in \verb|examples/pysp/farmer/NODEDATA|.

Advanced users may be interested in seeing the file \verb|coopr/pysp/utils/scenariomodels.py|, which defines the Python sets and parameters needed to describe stochastic elements. This file should not be edited.

\section{Command Line Arguments \label{cmdargsec}}

The basic PH algorithm is controlled by parameters that are set as command line arguments. Note that options begin with a double dash.
\begin{itemize}
\item \verb|-h|, \verb|--help|\\ Show help message and exit.
\item \verb|--verbose|\\ Generate verbose output for both initialization and execution. Default is False.
\item \verb|--report-solutions|\\ Always report PH solutions after each iteration. Enabled if --verbose is enabled. Default is False.
\item \verb|--report-weights|\\ Always report PH weights prior to each iteration. Enabled if --verbose is enabled. Default is False.
\item \verb|--model-directory|=MODEL\_DIRECTORY\\
The directory in which all model (reference and scenario) definitions, i.e., the ``.py'' files, are stored. Default is ".".
\item \verb|--instance-directory|=INSTANCE\_DIRECTORY\\
The directory in which all instance (reference and scenario) definitions, i.e., the ``.dat'' files, are stored. Default is ".".
\item \verb|--solver|=SOLVER\_TYPE\\ The type of solver used to solve scenario sub-problems. Default is cplex.
\item \verb|--solver-manager|=SOLVER\_MANAGER\_TYPE\\
The type of solver manager used to coordinate scenario sub-problem solves. Default is serial. This option is changed in parallel applications
as described in Section~\ref{parallelsec}.
\item \verb|--max-iterations|=MAX\_ITERATIONS\\
The maximal number of PH iterations. Default is 100.
\item \verb|--default-rho|=DEFAULT\_RHO\\
The default (global) rho for all blended variables. Default is 1.
\item \verb|--rho-cfgfile|=RHO\_CFGFILE\\
The name of a configuration script to compute PH rho values. Default is None.
\item \verb|--enable-termdiff-convergence|\\
Terminate PH based on the termdiff convergence metric. The convergence metric is the unscaled sum of differences between variable values and the mean. Default is True.
\item \verb|--enable-normalized-termdiff-convergence|\\
Terminate PH based on the normalized termdiff
convergence metric. Each term in the termdiff sum is normalized by the average value (NOTE: it is NOT normalized by the number of scenarios). Default is False.
\item \verb|--termdiff-threshold|=TERMDIFF\_THRESHOLD\\
The convergence threshold used in the term-diff and
normalized term-diff convergence criteria. Default is
0.01, which is too low for most problems.
\item \verb|--enable-free-discrete-count-convergence|\\
Terminate PH based on the free discrete variable count convergence metric. Default is False.
\item \verb|--free-discrete-count-threshold|=FREE\_DISCRETE\_COUNT\_THRESHOLD\\
The convergence threshold used in the free discrete variable count convergence criterion. Default is 20.
\item \verb|--enable-ww-extensions|\\
Enable the Watson-Woodruff PH extensions plugin. Default is False.
\item \verb|--ww-extension-cfgfile|=WW\_EXTENSION\_CFGFILE\\
The name of a configuration file for the Watson-Woodruff PH extensions plugin. Default is wwph.cfg.
\item \verb|--ww-extension-suffixfile|=WW\_EXTENSION\_SUFFIXFILE\\
The name of a variable suffix file for the Watson-Woodruff PH extensions plugin. Default is wwph.suffixes.
\item \verb|--user-defined-extension|=EXTENSIONFILE\\ Here, EXTENSIONFILE is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is the ``testphextension'' plugin, which simply prints a message to the screen for each callback. The file testphextension.py can be found in the sources directory and is shown in Section~\ref{ExtensionDetailsSec}. A test of this would be to specify \verb|--user-defined-extension=testphextension|, assuming testphextension.py is in your PYTHONPATH or current directory.

Note that both PH extensions (WW PH and your own) can co-exist; however, the WW plugin will be invoked first.

\item \verb|--scenario-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--scenario-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for all scenario sub-problem solves to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the sub-problem solver.
Whatever options are specified persist across all solves.
\item \verb|--ef-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--ef-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for the extensive form solve to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the EF problem solver.
\item \verb|--write-ef|\\ Upon termination, write the extensive form of the model, accounting for all fixed variables.
\item \verb|--solve-ef|\\ Following the write of the extensive form model, solve it.
\item \verb|--ef-output-file|=EF\_OUTPUT\_FILE\\
The name of the extensive form output file (currently only LP format is supported), if writing of the extensive form is enabled. Default is efout.lp.
\item \verb|--suppress-continuous-variable-output|\\
Eliminate PH-related output involving continuous variables. Default: output is not suppressed.
\item \verb|--keep-solver-files|\\ Retain temporary input and output files for scenario sub-problem solves. Default: files are not kept.
\item \verb|--output-solver-logs|\\ Output solver logs during scenario sub-problem solves. Default: no output.
\item \verb|--output-ef-solver-log|\\
Output solver log during the extensive form solve. Default: no output.
\item \verb|--output-solver-results|\\
Output solutions obtained after each scenario sub-problem solve. Default: no output.
\item \verb|--output-times|\\ Output timing statistics for various PH components. Default: no output.
\item \verb|--disable-warmstarts|\\
Disable warm-start of scenario sub-problem solves in PH iterations >= 1.
Default=False (i.e., warm starts are the default).
\item \verb|--drop-proximal-terms|\\
Eliminate proximal terms (i.e., the quadratic penalty terms) from the weighted PH objective.
Default=False (i.e., by default, the proximal terms are included).
\item \verb|--retain-quadratic-binary-terms|\\
Do not linearize PH objective terms involving binary decision variables.
Default=False (i.e., the proximal term for binary variables is linearized by default; this can have some impact on the relaxations during the branch and bound solution process).
\item \verb|--linearize-nonbinary-penalty-terms|=BPTS\\
Approximate the PH quadratic term for non-binary variables with a piece-wise linear function. The argument
BPTS gives the number of breakpoints in the linear approximation. The default is 0. Reasonable non-zero
values are usually in the range of 3 to 7. Note that if a breakpoint would be very close to a variable
bound, then the breakpoint is omitted. IMPORTANT: this option requires that all variables have bounds that
are either established in the reference model or by code specified using the --bounds-cfgfile command line option. See Section~\ref{LinearSec}
for more information about linearizing the proximal term.
\item \verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY\\
Specify the strategy to distribute breakpoints on the
[lb, ub] interval of each variable when linearizing. 0
indicates uniform distribution. 1 indicates
breakpoints at the node min and max, uniformly in-between. 2 indicates more aggressive concentration of
breakpoints near the observed node min/max.
\item \verb|--bounds-cfgfile|=BOUNDS\_CFGFILE\\
The argument BOUNDS\_CFGFILE specifies the name of an executable pyomo file that sets bounds. The default is that there is no file.
When specified, the code in this file is executed after the initialization of scenario data so the
bounds can be based on data from all scenarios. The config subdirectory of the farmer example contains a simple example of
such a file (boundsetter.cfg).
\item \verb|--checkpoint-interval|\\
The number of iterations between writes of a checkpoint file. Default is 0, indicating never.
\item \verb|--restore-from-checkpoint|\\
The name of the checkpoint file from which PH should be initialized. Default is not to restore from a checkpoint.
\item \verb|--profile=PROFILE|\\ Enable profiling of Python code. The value of this option is the number of functions that are summarized. The default is no profiling.
\item \verb|--enable-gc|\\ Enable the Python garbage collector. The default is no garbage collection.
\end{itemize}
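The two termdiff metrics can be illustrated for a single variable as follows (a sketch of the metric definitions above, not the pysp implementation; the function name is ours):

```python
# Sketch of the termdiff convergence metrics for one variable (illustrative).
# x: the variable's value in each scenario; p: the scenario probabilities.
def termdiff(x, p, normalized=False):
    xbar = sum(pi * xi for pi, xi in zip(p, x))   # mean solution
    total = sum(abs(xi - xbar) for xi in x)       # unscaled sum of differences
    if normalized and xbar != 0.0:
        total /= abs(xbar)                        # normalize by the average value
    return total
```

PH would terminate once the metric, summed over all variables, falls below the value given by \verb|--termdiff-threshold|.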

\section{Extensions via Callbacks \label{CallbackSec}}

Basic PH can converge slowly, so it is usually advisable to extend or modify it. In pysp, this is done via the pyomo plug-in
mechanism. The basic PH implementation provides callbacks that enable access to the data structures used by the algorithm.
In \S\ref{WWExtensionSec} we describe extensions that are provided with the release. In \S\ref{ExtensionDetailsSec},
we provide information for power users who may wish to modify or replace the extensions.

\subsection{Watson and Woodruff Extensions \label{WWExtensionSec}}

Watson and Woodruff describe innovations for accelerating PH \cite{phinnovate}, most of which are generalized and
implemented in the
file \verb|wwextension.py|, but users generally do not need to know this file name. To run the software
with these additional features, use a command of the form:
\begin{verbatim}
runph --enable-ww-extensions
\end{verbatim}
Many of the examples described in \S\ref{ExampleSec} use this plug-in. The main concept
is that some integer variables should be fixed as the algorithm progresses, for two reasons:

\begin{itemize}
\item Convergence detection:
A detailed analysis of PH algorithm behavior on a variety of problems indicates that individual decision variables
frequently converge to specific,
fixed values across scenarios in early PH iterations. Further, despite interactions among
the variables, the value frequently does not change in subsequent PH
iterations. Such variable ``fixing'' behaviors lead to a potentially powerful, albeit obvious, heuristic: once
a particular variable has been the same in all scenarios for some number of iterations, fix it to that value.
For problems where the constraints effectively limit $x$
from both sides, these methods may result in PH encountering infeasible scenario sub-problems even though the
problem is ultimately feasible.
\item Cycle detection: When there are integer variables, cycling is sometimes
encountered; consequently, cycle detection and avoidance mechanisms are required to force
eventual convergence of the PH algorithm in the mixed-integer case. To detect cycles, we focus on
repeated occurrences of the weights, implemented using a simple hashing scheme \cite{tabuhash} to
minimize the impact on run-time. Once a cycle in the weight vectors associated with any decision variable is detected,
the value of that variable is fixed.
\end{itemize}
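The hashing idea can be sketched in a few lines (a simplification of the scheme; the rounding tolerance and storage layout here are our assumptions, not the pysp implementation):

```python
# Simplified sketch of hash-based cycle detection on a variable's weight vector.
def weight_hash(w, digits=6):
    # round so that tiny numerical noise does not defeat the comparison
    return hash(tuple(round(wi, digits) for wi in w))

def cycle_detected(history, w):
    """history: set of previously seen weight-vector hashes for this variable."""
    h = weight_hash(w)
    if h in history:
        return True   # a repeated weight vector suggests a cycle
    history.add(h)
    return False
```

Storing only hashes, rather than the weight vectors themselves, is what keeps the run-time impact small.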

Fixing variables aggressively can result in shorter solution times, but can also result in solutions that are not as good. Furthermore, for some problems, aggressive fixing can result in infeasible sub-problems even though the problem is
ultimately feasible. Many of the parameters discussed in the next subsections control the fixing of variables. This is
discussed in a tutorial in Section~\ref{WWTutorialSec}.

\subsubsection{Variable Specific Parameters}

The plug-in makes use of parameters to control behavior at the variable level. Global defaults (to override the defaults stated here) should be set using methods described in \S\ref{ParmSec}. Values for each variable should be set using methods described in \S\ref{SuffixSec}.
Note that for variable fixing based on convergence detection, iteration zero is treated separately. The parameters
are as follows:

\begin{itemize}
\item fix\_continuous\_variables: True or False. If True, fixing applies to all variables. If False, then fixing applies only to discrete variables.
\item Iter0FixIfConvergedAtLB: 1 (True) or 0 (False). If 1, then discrete variables that are at their lower bound in all scenarios after
the iteration zero solves will be fixed at that bound.
\item Iter0FixIfConvergedAtUB: 1 (True) or 0 (False). If 1, then discrete variables that are at their upper bound in all scenarios after
the iteration zero solves will be fixed at that bound.
\item Iter0FixIfConvergedAtNB: 1 (True) or 0 (False). If 1, then discrete variables that are at the same value in all scenarios after
the iteration zero solves will be fixed at that value, without regard to whether it is a bound. If this is 1, it takes precedence. A value of zero, on the other hand, implies that variables will not be fixed at a non-bound.
\item FixWhenItersConvergedAtLB: The number of consecutive PH iterations that discrete variables must be at their lower bound in all scenarios before they
will be fixed at that bound. A value of zero implies that variables will not be fixed at the bound.
\item FixWhenItersConvergedAtUB: The number of consecutive PH iterations that discrete variables must be at their upper bound in all scenarios before they
will be fixed at that bound. A value of zero implies that variables will not be fixed at the bound.
\item FixWhenItersConvergedAtNB: The number of consecutive PH iterations that discrete variables must be at the same, consistent value in all scenarios before they
will be fixed at that value, without regard to whether it is a bound. If this is nonzero, it takes precedence. A value of zero, on the other hand, implies that variables will not be fixed at a non-bound.
\item FixWhenItersConvergedContinuous: The number of consecutive PH iterations that continuous variables must be at the same, consistent value in all scenarios before they
will be fixed at that value. A value of zero implies that continuous variables will not be fixed.
\item CanSlamToLB: True or False. If True, then slamming can be to the lower bound for any variable.
\item CanSlamToMin: True or False. If True, then slamming can be to the minimum across scenarios for any variable.
\item CanSlamToAnywhere: True or False. If True, then slamming can be to any value.
\item CanSlamToMax: True or False. If True, then slamming can be to the maximum across scenarios for any variable.
\item CanSlamToUB: True or False. If True, then slamming can be to the upper bound for any variable.
\item DisableCycleDetection: True or False. If True, then cycle detection and the associated slamming are completely disabled. This cannot be changed to False on the fly because a value of True at startup causes creation of the cycle detection storage to be bypassed.
\end{itemize}
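The iteration-zero fixing rules can be paraphrased in code (a hypothetical helper that illustrates the parameter semantics above; it is not pysp internals):

```python
# Sketch of iteration-zero fixing for one discrete variable (illustrative only).
# values: the variable's iteration-zero value in each scenario; lb/ub: its bounds.
# fix_at_lb/ub/nb mirror the Iter0FixIfConvergedAtLB/UB/NB parameters.
def iter0_fix_value(values, lb, ub, fix_at_lb, fix_at_ub, fix_at_nb):
    if len(set(values)) != 1:
        return None                 # not converged across scenarios
    v = values[0]
    if fix_at_nb:                   # takes precedence: fix at any common value
        return v
    if fix_at_lb and v == lb:
        return v
    if fix_at_ub and v == ub:
        return v
    return None                     # converged, but not at a fixable point
```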

In the event that multiple slam targets are True, Min and Max trump LB and UB, while Anywhere trumps all.

\subsubsection{General Parameters}

The plug-in also makes use of the following parameters, which should be set using methods described in \S\ref{ParmSec}.
\begin{itemize}
\item Iteration0Mipgap: Gives the mipgap to be sent to the solver for iteration zero solves. The default is zero, which causes nothing to be sent to the solver (i.e., the solver uses its default mipgap).
\item InitialMipGap: Gives the mipgap to be sent to the solver for iteration one solves. The default is zero, which causes nothing to be sent to the solver (i.e., the solver uses its default mipgap). If not zero, then this gap will be used as the starting point for the mipgap, which changes on each PH iteration in proportion to the convergence termination criterion, so that when the criterion for termination is met
the mipgap will be at (or near) the parameter value of FinalMipGap. If the InitialMipGap is significantly higher than the Iteration0Mipgap parameter, the PH algorithm may
perform poorly. This is because the iteration k-1 solutions are used to warm start the iteration k solves. If the iteration 1 mipgap is much higher than the iteration 0 mipgap, the iteration zero
solution, although not optimal for the iteration one objective, might be within the mipgap. Default: 0.10.
\item FinalMipGap: The target for the mipgap when PH has converged. Default: 0.001.
\item PH\_Iters\_Between\_Cycle\_Slams: Controls the number of iterations to wait after a variable is slammed due to
hash hits that suggest cycling. Zero indicates unlimited slams per cycle. Default: 1.
\item SlamAfterIter: The iteration number after which one variable will be slammed every other iteration to force convergence. Default: the number of scenarios.
\item hash\_hit\_len\_to\_slam: Ignore possible cycles for which the evidence of a cycle is shorter than this. Also, ignore cycles if any variables have been fixed in the previous hash\_hit\_len\_to\_slam PH iterations. Default:
the number of scenarios. This default is often not a good choice. For many problems with a lot of scenarios,
a value like 10 or 20 might be a lot better if rapid convergence is desired.
\item W\_hash\_history\_len: This obscure parameter controls how far back the code will look to see if there is a possible cycle. Default: max(100, number of scenarios).
\end{itemize}
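The proportional mipgap schedule described for InitialMipGap can be sketched as follows (our reading of the proportional rule, with a linear interpolation assumed; the exact pysp schedule may differ):

```python
# Sketch: move the mipgap from initial_gap toward final_gap in proportion to
# how far the convergence metric has moved toward its termination threshold.
# (Linear interpolation is an assumption made for illustration.)
def current_mipgap(initial_gap, final_gap, metric, initial_metric, threshold):
    if metric <= threshold:
        return final_gap
    frac = (metric - threshold) / (initial_metric - threshold)
    return final_gap + frac * (initial_gap - final_gap)
```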

\subsubsection{Setting Parameter Values \label{ParmSec}}

The parameters of PH and of any callbacks can be changed using the file wwph.cfg, which is executed by the Python interpreter after PH has been initialized. This is
a potentially powerful and/or dangerous procedure because it gives an opportunity
to change anything using the full features of Python and pyomo.

\subsubsection{Setting Suffix Values \label{SuffixSec}}

Suffixes are set using the data file named \verb|wwph.suffixes| using this syntax:

\begin{verbatim}
VARSPEC SUFFIX VALUE
\end{verbatim}

where VARSPEC is replaced by a variable specification, SUFFIX is replaced by a suffix name, and VALUE is replaced by the value of the suffix for
that variable or those variables. Here is an example:

\begin{verbatim}
Delta CanSlamToLB False
Gamma[*,Ano1] SlammingPriority 10
Gamma[*,Ano2] SlammingPriority 20
...
\end{verbatim}
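The three-column format above is simple enough to parse directly (an illustrative sketch, not the actual pysp reader):

```python
# Sketch of reading the VARSPEC SUFFIX VALUE format shown above (illustrative).
def parse_suffixes(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("..."):
            continue                          # skip blank/continuation lines
        varspec, suffix, value = line.split()
        entries.append((varspec, suffix, value))
    return entries
```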


\subsection{Callback Details \label{ExtensionDetailsSec}}

Most users of pysp can skip this subsection.
A callback class definition named iphextension is in the file iphextension.py and can be used to implement callbacks at a
variety of points in PH. For example, the method post\_iteration\_0\_solves is called immediately after all iteration zero
solves, but before averages and weights have been computed, while the method post\_iteration\_0 is called after averages and
weights based on iteration zero have been computed. The file iphextension.py is in the coopr/pysp directory and is not
intended to be edited by users.

The user defines a class derived from SingletonPlugin that implements iphextension. Its name is given to ph as an option, and the class will be automatically instantiated by ph.
It has access to data and methods in the PH class, which are defined in the file ph.py. An example of such a class is in the
file named testphextension.py in the pysp example directory.

A user defined extension file can be incorporated by using the command line option
\verb|--user-defined-extension=EXTENSIONFILE|. Here, EXTENSIONFILE is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is the ``testphextension'' plugin, which simply prints a message to the screen for each callback. The file testphextension.py can be found in the sources directory. An easy test of this would be to specify \verb|--user-defined-extension=testphextension|;
note that the ``.py'' file extension is not included on the runph command line.

Both your own extension and the WWPH extensions can co-exist; however, the WW plugin will be invoked first at each callback point if both are included.

Here are the callbacks:
\begin{itemize}
\item post\_ph\_initialization: Called after PH data structures have been initialized, but before the iteration zero solves.
\item post\_iteration\_0\_solves: Called after the iteration zero solves and the computation of some statistics such as averages, but before weights are updated.
\item post\_iteration\_0: Called after all processing for iteration zero is complete.
\item post\_iteration\_k\_solves: Called, for iterations after iteration zero, after the solves and the computation of some statistics such as averages, but before weights are updated.
\item post\_iteration\_k: Called after all processing for each iteration after iteration zero is complete.
\item post\_ph\_execution: Called after PH execution is complete.
\end{itemize}
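The plug-in protocol amounts to duck-typed callback dispatch, which can be illustrated in plain Python (schematic only; the real mechanism uses the coopr SingletonPlugin machinery, and the class and function names here are ours):

```python
# Plain-Python illustration of callback dispatch (not the coopr plugin system).
class RecordingExtension:
    """A stand-in extension that records which callbacks fired."""
    def __init__(self):
        self.events = []
    def post_ph_initialization(self, ph):
        self.events.append("post_ph_initialization")
    def post_iteration_0(self, ph):
        self.events.append("post_iteration_0")

def invoke_callbacks(extensions, point, ph=None):
    for ext in extensions:                    # the WW plugin would come first
        callback = getattr(ext, point, None)
        if callback is not None:
            callback(ph)
```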

Users interested in writing their own extensions will probably want to refer to the source file ph.py to get a deeper understanding of when the callbacks occur.

\section{Examples \label{ExampleSec}}

A number of examples are provided with pysp.

\subsection{Farmer Example}

This two-stage example is composed of models and data for the ``Farmer'' stochastic program, introduced in Section 1.1 of
{\em Introduction to Stochastic Programming} by Birge and
Louveaux \cite{spbook}.

\begin{itemize}
\item ReferenceModel.py: a single-scenario model for the SP
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do; used to flesh out variable and constraint index sets)
\item ScenarioStructure.dat: data file defining the scenario tree.
\item AboveAverageScenario.dat: one of the scenario data files.
\item BelowAverageScenario.dat: one of the scenario data files.
\item AverageScenario.dat: one of the scenario data files.
\end{itemize}

The command runph executes PH, assuming the ReferenceModel.* and ScenarioStructure.* files are present and correct.
This example is probably in a directory with a name something like:

\begin{verbatim}
coopr/examples/pysp/farmer
\end{verbatim}

The data is in a subdirectory called scenariodata and the model is in the models subdirectory. To
invoke PH for this problem, connect to this farmer directory and use the command:

\begin{verbatim}
runph --model-directory=models --instance-directory=scenariodata
\end{verbatim}
385 | |
---|
386 | \subsection{Sizes Example} |
---|
387 | |
---|
This two-stage example is composed of models and data for the ``Sizes'' stochastic program \cite{sizes,lokwood}, which consists of the following files:
---|
389 | |
---|
390 | \begin{itemize} |
---|
\item wwph.cfg: replaces default algorithm parameter values for the Watson and Woodruff extensions.
\item wwph.suffixes: sets algorithm parameter values at the variable level for the Watson and Woodruff extensions.
---|
393 | \item ReferenceModel.py: a single-scenario model for the SP |
---|
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do - used to flesh out variable and constraint index sets)
---|
395 | \item ScenarioStructure.dat: data file defining the scenario tree. |
---|
396 | \item Scenario1.dat: one of the scenario data files. |
---|
397 | \item Scenario2.dat: one of the scenario data files. |
---|
398 | \item ... |
---|
399 | \end{itemize} |
---|
400 | |
---|
401 | This example is probably in a directory with a name something like: |
---|
402 | |
---|
403 | \begin{verbatim} |
---|
404 | coopr\packages\coopr\examples\pysp\sizes |
---|
405 | \end{verbatim} |
---|
406 | |
---|
The data for a three-scenario version is in a subdirectory called SIZES3 and a ten-scenario
dataset is in SIZES10.
---|
409 | |
---|
To invoke PH for the ten-scenario problem, connect to the sizes directory and use the command:
---|
411 | |
---|
412 | \begin{verbatim} |
---|
413 | runph --model-directory=models --instance-directory=SIZES10 |
---|
414 | \end{verbatim} |
---|
415 | |
---|
416 | |
---|
417 | \subsection{Forestry Example} |
---|
418 | |
---|
419 | This four-stage example is composed of models and data for the ``forestry'' stochastic program \cite{}, which consists of the following files: |
---|
420 | |
---|
421 | \begin{itemize} |
---|
\item wwph.cfg: replaces default algorithm parameter values for the Watson and Woodruff extensions.
\item wwph.suffixes: sets algorithm parameter values at the variable level for the Watson and Woodruff extensions.
---|
424 | \item ReferenceModel.py: a single-scenario model for the SP |
---|
\item ReferenceModel.dat: a single-scenario data file for the SP (any scenario will do - used to flesh out variable and constraint index sets)
---|
426 | \item ScenarioStructure.dat: data file defining the scenario tree. |
---|
427 | \item Scenario1.dat: one of the scenario data files. |
---|
428 | \item Scenario2.dat: one of the scenario data files. |
---|
429 | \item ... |
---|
430 | \end{itemize} |
---|
431 | |
---|
432 | This example is probably in a directory with a name something like: |
---|
433 | |
---|
434 | \begin{verbatim} |
---|
435 | coopr\packages\coopr\examples\pysp\forestry |
---|
436 | \end{verbatim} |
---|
437 | |
---|
438 | |
---|
There are two families of instances: ``Chile'' and ``Davis,'' each with four stages and eighteen scenarios. There is also a small
two-stage, four-scenario instance in the subdirectory DAVIS2STAGE.
---|
441 | |
---|
The full eighteen-scenario problem instance takes a very long time to solve without the Watson and Woodruff extensions; to
invoke PH for this problem without them, connect to the forestry directory and use the command:
---|
444 | |
---|
445 | \begin{verbatim} |
---|
runph --model-directory=models --instance-directory=chile
---|
447 | \end{verbatim} |
---|
448 | and to run with the extensions, use |
---|
449 | \begin{verbatim} |
---|
450 | runph --model-directory=models --instance-directory=davis \ |
---|
451 | --enable-ww-extensions --ww-extension-cfgfile=config/wwph.cfg \ |
---|
452 | --ww-extension-suffixfile=config/wwph.suffixes |
---|
453 | \end{verbatim} |
---|
454 | |
---|
455 | |
---|
456 | \section{Tutorial: Parameters for Watson and Woodruff PH Extensions \label{WWTutorialSec}} |
---|
457 | |
---|
458 | The parameters for the PH extensions in WWPHExtensions.py provide the user with considerable control over |
---|
459 | how and under what conditions variables are fixed during the PH algorithm execution. Often, some variables converge |
---|
460 | to consistent values during early iterations and can be fixed at these values without affecting quality very much and without |
---|
461 | affecting feasibility at all. Also, the algorithm may need to fix some variables during execution in order to break cycles. |
---|
462 | In both cases, guidance from the user concerning which classes of variables can and/or should be fixed under various circumstances can be very helpful. |
---|
463 | |
---|
464 | The overarching goal is to support industrial and government users who solve the same model repeatedly with different, but |
---|
465 | somewhat similar, data each time. In such settings, it behooves the modeler to consider tradeoffs between speed, solution quality |
---|
and feasibility and create at least one good set of directives and parameters for the PH algorithm. In some cases, a user may want more than
one set of directives and parameters, depending on whether speed of execution or quality of solution is more important. Iteration zero is controlled separately because the absence of the quadratic term often results in faster solves for this
iteration, and fixing variables after this iteration has the maximum possible impact on speedup.
---|
469 | |
---|
470 | In order to discuss these issues, we consider an example. |
---|
471 | |
---|
472 | \subsection{Sizes Example} |
---|
473 | |
---|
474 | The Sizes example is simple and small. In fact, the instances that we will consider are so small that there is really |
---|
475 | no need to use the PH algorithm since the extensive form can be solved directly in a few minutes. However, it serves as a |
---|
476 | vehicle for introducing the concepts. |
---|
477 | |
---|
478 | This description and formulation comes from the original paper by Jorjani, Scott and Woodruff \cite{sizes}. |
---|
479 | |
---|
480 | If demand constraints for the current period are |
---|
481 | based on firm orders but future demands are based on forecasts or |
---|
482 | conjecture, then they should not be treated in the same fashion. We |
---|
483 | must recognize that future demand constraints are stochastic. That is |
---|
484 | to say that they should be modeled as random variables. |
---|
It may not be reasonable or useful to consider the entire demand probability
distribution. It may not be reasonable because there may not be
---|
487 | sufficient data to estimate an entire distribution. It may not be useful |
---|
488 | because the essence of the stochastics may be captured by specifying a |
---|
489 | small number of representative {\em scenarios}. We assume that scenarios |
---|
490 | are specified by giving a full set of random variable realizations and a |
---|
491 | corresponding probability. |
---|
492 | We index the scenario set, ${\cal L}$, by $\ell$ and |
---|
493 | refer to the probability of occurrence of $\ell$ (or, more accurately, a |
---|
494 | realization ``near'' scenario $\ell$) as $\Prl$. |
---|
495 | We refer to solution systems that satisfy constraints with probability one as |
---|
496 | {\em admissible}. |
---|
497 | |
---|
498 | In addition to modeling stochastics, we would like to model {\em |
---|
499 | recourse} as well. That is, we would like to model the ability of |
---|
500 | decision makers to make use of new information (e.g., orders) at the |
---|
501 | start of each planning period. We allow our solution vectors to |
---|
502 | depend on the scenario that is realized, but we do not want to assume |
---|
503 | prescience. We refer to a system of solution vectors as {\em |
---|
504 | implementable} if for all decision times $t$, the solution vector |
---|
505 | elements corresponding to period $1,\ldots,t$ are constant with |
---|
506 | respect to information that becomes available only after stage $t$. |
---|
507 | We refer to the set of implementable solutions as ${\cal N}_{\cal L}$. It is |
---|
508 | possible to require implementable solutions by adding {\em |
---|
non-anticipativity constraints}, but we will instead make use of
---|
510 | solution procedures that implicitly guarantee implementable solutions. |
---|
511 | |
---|
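For a two-stage problem with enumerated scenarios, implementability reduces to requiring that the first-stage variable values agree across all scenarios, and the objective weights each scenario's cost by $\Prl$. A minimal sketch (the scenario names, probabilities, values, and costs are made up for illustration):

```python
# Implementability for a two-stage scenario solution system:
# first-stage values must not depend on which scenario is realized.
# All data below is illustrative.

def is_implementable(solutions, first_stage_vars):
    """True if every first-stage variable takes one value across all scenarios."""
    for var in first_stage_vars:
        vals = set(solutions[s][var] for s in solutions)
        if len(vals) > 1:
            return False
    return True

def expected_cost(costs, probs):
    """Expected cost: sum over scenarios l of Pr(l) * cost(l)."""
    return sum(probs[s] * costs[s] for s in probs)

solutions = {
    "Above": {"x1": 3.0, "y2": 10.0},   # x1 is first-stage, y2 is second-stage
    "Below": {"x1": 3.0, "y2": 4.0},
}
probs = {"Above": 0.5, "Below": 0.5}
costs = {"Above": 120.0, "Below": 80.0}

print(is_implementable(solutions, ["x1"]))  # first-stage values agree
print(expected_cost(costs, probs))
```

Second-stage variables are free to differ by scenario; that is exactly the recourse being modeled.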
In the stochastic, multi-period formulation that follows, the objective
is to minimize expected costs. We invoke the network equivalence
---|
514 | given earlier to drop the explicit requirement that $\Vector{x}$ and |
---|
515 | $\Vector{y}$ be integers. Variables and data are subscripted with a |
---|
516 | period index $t$ that takes on values up to $T$. To model the idea |
---|
517 | that sleeves produced in one period can be used as-is or cut in |
---|
518 | subsequent periods, we use $x_{ijt}$ to indicate that sleeves of |
---|
519 | length index $i$ are to be used without cutting in period $t$ if $i=j$ |
---|
520 | and with cutting otherwise. The $\Vector{y}$ vector gives production |
---|
521 | quantities for each length in each period without regard to the period |
---|
522 | in which they will be used (and perhaps cut). The formulation is |
---|
523 | essentially an extension of ILP except that a capacity constraint |
---|
524 | must be added in the multiple period formulation. Holding costs could |
---|
525 | be added, but an additional subscript becomes necessary without the |
---|
526 | benefit of any additional insight. As an aside, note that the addition |
---|
527 | of holding costs would add a large number of continuous variables, but |
---|
528 | no new integers so the impact on computational performance would not |
---|
529 | be catastrophic. |
---|
530 | |
---|
\[
\min\sum_{\ell\in{\cal L}}\Prl
\sum_{t=1}^{T}\left[\sum_{i=1}^{N}\left(s_{i}z_{it\ell}+p_{i}y_{it\ell}\right)
+r\sum_{i=1}^{N}\sum_{j<i}x_{ijt\ell}\right] \;\;\;\;\mbox{(SMIP)}
\]
---|
536 | subject to |
---|
537 | \begin{eqnarray} |
---|
\sum_{j \geq i}x_{jit\ell} & \geq & d_{it\ell}
---|
539 | \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{Dconstr} \\ |
---|
540 | \sum_{t'\leq t}\left[\sum_{j \leq i}x_{ijt'\ell} - y_{it'\ell}\right] & \leq & 0 |
---|
541 | \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{Pconstr} \\ |
---|
542 | y_{it\ell} - Mz_{it\ell} & \leq & 0 |
---|
543 | \;\;\;\; \ell \in {\cal L}, \;i=1,\ldots,N, \; t=1,\ldots,T \label{Mconstr} \\ |
---|
544 | \sum_{i=1}^{N}y_{it\ell} & \leq & c_{t\ell} |
---|
545 | \;\;\;\; \ell \in {\cal L}, \; t=1,\ldots,T \label{Cconstr} \\ |
---|
546 | z_{it\ell} & \in & \{0,1\} |
---|
547 | \;\;\;\; \ell \in {\cal L}, \; i=1,\ldots,N, \; t=1,\ldots,T \label{intconstr} \\ |
---|
548 | \Vector{x}, \Vector{y}, \Vector{z} & \in & {\cal N}_{\cal L} |
---|
549 | \end{eqnarray} |
---|
550 | |
---|
Bear in mind that solution vector elements corresponding
to periods two through $T$ are not actually intended for use; they
are computed only to see the effect that the period 1 (the current period)
decisions would have on future optimal behavior. At the start of period 2
-- or at any other time -- the decision maker would run the model again
with updated demands for the current period and new scenario estimates.
---|
557 | |
---|
558 | \subsection{ReferenceModel.py} |
---|
559 | |
---|
560 | Here is the single scenario reference model: |
---|
561 | |
---|
562 | {\small |
---|
563 | \begin{verbatim} |
---|
564 | # |
---|
565 | # This is the two-period version of the SIZES optimization model. |
---|
566 | # |
---|
567 | |
---|
568 | from coopr.pyomo import * |
---|
569 | |
---|
570 | # |
---|
571 | # Model |
---|
572 | # |
---|
573 | |
---|
574 | model = Model() |
---|
575 | |
---|
576 | # |
---|
577 | # Parameters |
---|
578 | # |
---|
579 | |
---|
580 | # the number of product sizes. |
---|
581 | model.NumSizes = Param(within=NonNegativeIntegers) |
---|
582 | |
---|
583 | def product_sizes_rule(model): |
---|
584 | ans = set() |
---|
585 | for i in range(1, model.NumSizes()+1): |
---|
586 | ans.add(i) |
---|
587 | return ans |
---|
588 | |
---|
589 | # the set of sizes, labeled 1 through NumSizes. |
---|
590 | model.ProductSizes = Set(initialize=product_sizes_rule) |
---|
591 | |
---|
592 | # the deterministic demands for product at each size. |
---|
593 | model.DemandsFirstStage = Param(model.ProductSizes, within=NonNegativeIntegers) |
---|
594 | model.DemandsSecondStage = Param(model.ProductSizes, within=NonNegativeIntegers) |
---|
595 | |
---|
596 | # the unit production cost at each size. |
---|
597 | model.UnitProductionCosts = Param(model.ProductSizes, within=NonNegativeReals) |
---|
598 | |
---|
599 | # the setup cost for producing any units of size i. |
---|
600 | model.SetupCosts = Param(model.ProductSizes, within=NonNegativeReals) |
---|
601 | |
---|
602 | # the unit penalty cost of meeting demand for size j with larger size i. |
---|
603 | model.UnitPenaltyCosts = Param(model.ProductSizes, within=NonNegativeReals) |
---|
604 | |
---|
605 | # the cost to reduce a unit i to a lower unit j. |
---|
606 | model.UnitReductionCost = Param(within=NonNegativeReals) |
---|
607 | |
---|
608 | # a cap on the overall production within any time stage. |
---|
609 | model.Capacity = Param(within=PositiveReals) |
---|
610 | |
---|
611 | # a derived set to constrain the NumUnitsCut variable domain. |
---|
612 | def num_units_cut_domain_rule(model): |
---|
613 | ans = set() |
---|
614 | for i in range(1,model.NumSizes()+1): |
---|
615 | for j in range(1, i+1): |
---|
616 | ans.add((i,j)) |
---|
617 | return ans |
---|
618 | |
---|
619 | model.NumUnitsCutDomain = Set(initialize=num_units_cut_domain_rule, dimen=2) |
---|
620 | |
---|
621 | # |
---|
622 | # Variables |
---|
623 | # |
---|
624 | |
---|
625 | # are any products at size i produced? |
---|
626 | model.ProduceSizeFirstStage = Var(model.ProductSizes, domain=Boolean) |
---|
627 | model.ProduceSizeSecondStage = Var(model.ProductSizes, domain=Boolean) |
---|
628 | |
---|
629 | # NOTE: The following (num-produced and num-cut) variables are implicitly integer |
---|
630 | # under the normal cost objective, but with the PH cost objective, this isn't |
---|
631 | # the case. |
---|
632 | |
---|
633 | # the number of units at each size produced. |
---|
634 | model.NumProducedFirstStage = Var(model.ProductSizes, domain=NonNegativeIntegers) |
---|
635 | model.NumProducedSecondStage = Var(model.ProductSizes, domain=NonNegativeIntegers) |
---|
636 | |
---|
637 | # the number of units of size i cut (down) to meet demand for units of size j. |
---|
638 | model.NumUnitsCutFirstStage = Var(model.NumUnitsCutDomain, domain=NonNegativeIntegers) |
---|
639 | model.NumUnitsCutSecondStage = Var(model.NumUnitsCutDomain, domain=NonNegativeIntegers) |
---|
640 | |
---|
641 | # stage-specific cost variables for use in the pysp scenario tree / analysis. |
---|
642 | model.FirstStageCost = Var(domain=NonNegativeReals) |
---|
643 | model.SecondStageCost = Var(domain=NonNegativeReals) |
---|
644 | |
---|
645 | # |
---|
646 | # Constraints |
---|
647 | # |
---|
648 | |
---|
649 | # ensure that demand is satisfied in each time stage, accounting for cut-downs. |
---|
650 | def demand_satisfied_first_stage_rule(i, model): |
---|
651 | return (0.0, \ |
---|
652 | sum(model.NumUnitsCutFirstStage[j,i] \ |
---|
653 | for j in model.ProductSizes if j >= i) - model.DemandsFirstStage[i], \ |
---|
654 | None) |
---|
655 | |
---|
656 | def demand_satisfied_second_stage_rule(i, model): |
---|
657 | return (0.0, \ |
---|
658 | sum(model.NumUnitsCutSecondStage[j,i] \ |
---|
659 | for j in model.ProductSizes if j >= i) - model.DemandsSecondStage[i], \ |
---|
660 | None) |
---|
661 | |
---|
662 | model.DemandSatisfiedFirstStage = \ |
---|
663 | Constraint(model.ProductSizes, rule=demand_satisfied_first_stage_rule) |
---|
664 | model.DemandSatisfiedSecondStage = \ |
---|
665 | Constraint(model.ProductSizes, rule=demand_satisfied_second_stage_rule) |
---|
666 | |
---|
667 | # ensure that you don't produce any units if the decision has been made to disable production. |
---|
668 | def enforce_production_first_stage_rule(i, model): |
---|
669 | # The production capacity per time stage serves as a simple upper bound for "M". |
---|
670 | return (None, \ |
---|
671 | model.NumProducedFirstStage[i] - model.Capacity * model.ProduceSizeFirstStage[i], \ |
---|
672 | 0.0) |
---|
673 | |
---|
674 | def enforce_production_second_stage_rule(i, model): |
---|
675 | # The production capacity per time stage serves as a simple upper bound for "M". |
---|
676 | return (None, \ |
---|
677 | model.NumProducedSecondStage[i] - model.Capacity * model.ProduceSizeSecondStage[i], \ |
---|
678 | 0.0) |
---|
679 | |
---|
680 | model.EnforceProductionBinaryFirstStage = \ |
---|
681 | Constraint(model.ProductSizes, rule=enforce_production_first_stage_rule) |
---|
682 | model.EnforceProductionBinarySecondStage = \ |
---|
683 | Constraint(model.ProductSizes, rule=enforce_production_second_stage_rule) |
---|
684 | |
---|
685 | # ensure that the production capacity is not exceeded for each time stage. |
---|
686 | def enforce_capacity_first_stage_rule(model): |
---|
687 | return (None, \ |
---|
688 | sum(model.NumProducedFirstStage[i] for i in model.ProductSizes) - model.Capacity, \ |
---|
689 | 0.0) |
---|
690 | |
---|
691 | def enforce_capacity_second_stage_rule(model): |
---|
692 | return (None, \ |
---|
693 | sum(model.NumProducedSecondStage[i] for i in model.ProductSizes) - model.Capacity, \ |
---|
694 | 0.0) |
---|
695 | |
---|
696 | model.EnforceCapacityLimitFirstStage = Constraint(rule=enforce_capacity_first_stage_rule) |
---|
697 | model.EnforceCapacityLimitSecondStage = Constraint(rule=enforce_capacity_second_stage_rule) |
---|
698 | |
---|
699 | # ensure that you can't generate inventory out of thin air. |
---|
700 | def enforce_inventory_first_stage_rule(i, model): |
---|
701 | return (None, \ |
---|
702 | sum(model.NumUnitsCutFirstStage[i,j] \ |
---|
703 | for j in model.ProductSizes if j <= i) - model.NumProducedFirstStage[i], \ |
---|
704 | 0.0) |
---|
705 | |
---|
706 | def enforce_inventory_second_stage_rule(i, model): |
---|
707 | return (None, \ |
---|
708 | sum(model.NumUnitsCutFirstStage[i,j] \ |
---|
709 | for j in model.ProductSizes \ |
---|
710 | if j <= i) + sum(model.NumUnitsCutSecondStage[i,j] \ |
---|
711 | for j in model.ProductSizes if j <= i) \ |
---|
712 | - model.NumProducedFirstStage[i] - model.NumProducedSecondStage[i], \ |
---|
713 | 0.0) |
---|
714 | |
---|
715 | model.EnforceInventoryFirstStage = \ |
---|
716 | Constraint(model.ProductSizes, rule=enforce_inventory_first_stage_rule) |
---|
717 | model.EnforceInventorySecondStage = \ |
---|
718 | Constraint(model.ProductSizes, rule=enforce_inventory_second_stage_rule) |
---|
719 | |
---|
720 | # stage-specific cost computations. |
---|
721 | def first_stage_cost_rule(model): |
---|
722 | production_costs = \ |
---|
723 | sum(model.SetupCosts[i] * model.ProduceSizeFirstStage[i] \ |
---|
724 | + model.UnitProductionCosts[i] * model.NumProducedFirstStage[i] \ |
---|
725 | for i in model.ProductSizes) |
---|
726 | cut_costs = \ |
---|
727 | sum(model.UnitReductionCost * model.NumUnitsCutFirstStage[i,j] \ |
---|
728 | for (i,j) in model.NumUnitsCutDomain if i != j) |
---|
729 | return (model.FirstStageCost - production_costs - cut_costs) == 0.0 |
---|
730 | |
---|
731 | model.ComputeFirstStageCost = Constraint(rule=first_stage_cost_rule) |
---|
732 | |
---|
733 | def second_stage_cost_rule(model): |
---|
734 | production_costs = \ |
---|
735 | sum(model.SetupCosts[i] * model.ProduceSizeSecondStage[i] \ |
---|
736 | + model.UnitProductionCosts[i] * model.NumProducedSecondStage[i] \ |
---|
737 | for i in model.ProductSizes) |
---|
738 | cut_costs = \ |
---|
739 | sum(model.UnitReductionCost * model.NumUnitsCutSecondStage[i,j] \ |
---|
740 | for (i,j) in model.NumUnitsCutDomain if i != j) |
---|
741 | return (model.SecondStageCost - production_costs - cut_costs) == 0.0 |
---|
742 | |
---|
743 | model.ComputeSecondStageCost = Constraint(rule=second_stage_cost_rule) |
---|
744 | |
---|
745 | # |
---|
746 | # Objective |
---|
747 | # |
---|
748 | |
---|
749 | def total_cost_rule(model): |
---|
750 | return (model.FirstStageCost + model.SecondStageCost) |
---|
751 | |
---|
752 | model.TotalCostObjective = Objective(rule = total_cost_rule, sense=minimize) |
---|
753 | \end{verbatim} |
---|
754 | } |
---|
755 | |
---|
756 | \subsection{ReferenceModel.dat} |
---|
757 | |
---|
758 | This file establishes the data for a representative instance. The main thing to notice is that the indexes for |
---|
759 | sizes happen to be given as integers, which is not required: they could have been strings. |
---|
760 | |
---|
761 | {\small |
---|
762 | \begin{verbatim} |
---|
763 | #ReferenceModel.dat |
---|
764 | param NumSizes := 10 ; |
---|
765 | |
---|
766 | param Capacity := 200000 ; |
---|
767 | |
---|
768 | param DemandsFirstStage := 1 2500 2 7500 3 12500 4 10000 5 35000 |
---|
769 | 6 25000 7 15000 8 12500 9 12500 10 5000 ; |
---|
770 | |
---|
771 | param DemandsSecondStage := 1 2500 2 7500 3 12500 4 10000 5 35000 |
---|
772 | 6 25000 7 15000 8 12500 9 12500 10 5000 ; |
---|
773 | |
---|
774 | param UnitProductionCosts := 1 0.748 2 0.7584 3 0.7688 4 0.7792 5 0.7896 |
---|
775 | 6 0.8 7 0.8104 8 0.8208 9 0.8312 10 0.8416 ; |
---|
776 | |
---|
777 | param SetupCosts := 1 453 2 453 3 453 4 453 5 453 6 453 7 453 8 453 9 453 10 453 ; |
---|
778 | |
---|
779 | param UnitReductionCost := 0.008 ; |
---|
\end{verbatim}
}

\subsection{ScenarioStructure.dat}

Here is the data file that describes the stochastics:

{\small
\begin{verbatim}
---|
787 | # IMPORTANT - THE STAGES ARE ASSUMED TO BE IN TIME-ORDER. |
---|
788 | |
---|
789 | set Stages := FirstStage SecondStage ; |
---|
790 | |
---|
791 | set Nodes := RootNode |
---|
792 | Scenario1Node |
---|
793 | Scenario2Node |
---|
794 | Scenario3Node |
---|
795 | Scenario4Node |
---|
796 | Scenario5Node |
---|
797 | Scenario6Node |
---|
798 | Scenario7Node |
---|
799 | Scenario8Node |
---|
800 | Scenario9Node |
---|
801 | Scenario10Node ; |
---|
802 | |
---|
803 | param NodeStage := RootNode FirstStage |
---|
804 | Scenario1Node SecondStage |
---|
805 | Scenario2Node SecondStage |
---|
806 | Scenario3Node SecondStage |
---|
807 | Scenario4Node SecondStage |
---|
808 | Scenario5Node SecondStage |
---|
809 | Scenario6Node SecondStage |
---|
810 | Scenario7Node SecondStage |
---|
811 | Scenario8Node SecondStage |
---|
812 | Scenario9Node SecondStage |
---|
813 | Scenario10Node SecondStage ; |
---|
814 | |
---|
815 | set Children[RootNode] := Scenario1Node |
---|
816 | Scenario2Node |
---|
817 | Scenario3Node |
---|
818 | Scenario4Node |
---|
819 | Scenario5Node |
---|
820 | Scenario6Node |
---|
821 | Scenario7Node |
---|
822 | Scenario8Node |
---|
823 | Scenario9Node |
---|
824 | Scenario10Node ; |
---|
825 | |
---|
826 | param ConditionalProbability := RootNode 1.0 |
---|
827 | Scenario1Node 0.10 |
---|
828 | Scenario2Node 0.10 |
---|
829 | Scenario3Node 0.10 |
---|
830 | Scenario4Node 0.10 |
---|
831 | Scenario5Node 0.10 |
---|
832 | Scenario6Node 0.10 |
---|
833 | Scenario7Node 0.10 |
---|
834 | Scenario8Node 0.10 |
---|
835 | Scenario9Node 0.10 |
---|
836 | Scenario10Node 0.10 ; |
---|
837 | |
---|
838 | set Scenarios := Scenario1 |
---|
839 | Scenario2 |
---|
840 | Scenario3 |
---|
841 | Scenario4 |
---|
842 | Scenario5 |
---|
843 | Scenario6 |
---|
844 | Scenario7 |
---|
845 | Scenario8 |
---|
846 | Scenario9 |
---|
847 | Scenario10 ; |
---|
848 | |
---|
849 | param ScenarioLeafNode := Scenario1 Scenario1Node |
---|
850 | Scenario2 Scenario2Node |
---|
851 | Scenario3 Scenario3Node |
---|
852 | Scenario4 Scenario4Node |
---|
853 | Scenario5 Scenario5Node |
---|
854 | Scenario6 Scenario6Node |
---|
855 | Scenario7 Scenario7Node |
---|
856 | Scenario8 Scenario8Node |
---|
857 | Scenario9 Scenario9Node |
---|
858 | Scenario10 Scenario10Node ; |
---|
859 | |
---|
860 | set StageVariables[FirstStage] := ProduceSizeFirstStage[*] |
---|
861 | NumProducedFirstStage[*] |
---|
862 | NumUnitsCutFirstStage[*,*] ; |
---|
863 | set StageVariables[SecondStage] := ProduceSizeSecondStage[*] |
---|
864 | NumProducedSecondStage[*] |
---|
865 | NumUnitsCutSecondStage[*,*] ; |
---|
866 | |
---|
867 | param StageCostVariable := FirstStage FirstStageCost |
---|
868 | SecondStage SecondStageCost ; |
---|
869 | |
---|
870 | \end{verbatim} |
---|
871 | } |
---|
872 | |
---|
873 | \subsection{wwph.cfg} |
---|
874 | |
---|
This file provides default values. Even though most of them will be overridden at the variable level, it makes sense
to provide instance-specific defaults. For this problem, it does not seem helpful or prudent to do any fixing of continuous
variables, since they are used only to compute the objective function terms. The \verb|ProduceSizeFirstStage| variables are binary,
and fixing their values largely determines the values of the other variables. The other first-stage variables are general integers. The \verb|ProduceSizeFirstStage| variables can safely be fixed because, provided that the largest size is not fixed at zero, there is little risk of infeasibility: larger sizes can be cut (at a cost) to meet demand for smaller sizes. Consequently, it is safe to let the algorithm slam those
variables as needed. Slamming the other variables is riskier because they could get slammed to values that don't make
sense given the values of the \verb|ProduceSizeFirstStage| variables.
---|
881 | |
---|
Fixing variables that have converged will speed up the algorithm, perhaps at the cost of a less desirable
objective function value. It makes sense to fix the \verb|ProduceSizeFirstStage| variables before fixing the others, both to avoid
inconsistencies and because the \verb|ProduceSizeFirstStage| variables have a strong tendency to imply values for the other
variables.
---|
886 | |
---|
887 | Here is a sensible and internally consistent, if a bit conservative, wwph.cfg file: |
---|
888 | |
---|
889 | {\small |
---|
890 | \begin{verbatim} |
---|
891 | # wwph.cfg |
---|
892 | # python commands that are executed by the wwphextensions.py file |
---|
893 | # to set parameters and parameter defaults |
---|
894 | |
---|
895 | self.fix_continuous_variables = False |
---|
896 | |
---|
897 | self.Iteration0MipGap = 0.02 |
---|
898 | self.InitialMipGap = 0.10 |
---|
899 | self.FinalMipGap = 0.001 |
---|
900 | |
---|
901 | # for all six of these, zero means don't do it. |
---|
902 | self.Iter0FixIfConvergedAtLB = 0 # 1 or 0 |
---|
903 | self.Iter0FixIfConvergedAtUB = 0 # 1 or 0 |
---|
904 | self.Iter0FixIfConvergedAtNB = 0 # 1 or 0 (converged to a non-bound) |
---|
905 | self.FixWhenItersConvergedAtLB = 0 |
---|
906 | self.FixWhenItersConvergedAtUB = 25 |
---|
907 | self.FixWhenItersConvergedAtNB = 0 # converged to a non-bound |
---|
908 | self.FixWhenItersConvergedContinuous = 0 |
---|
909 | |
---|
910 | # "default" slamming parms |
---|
911 | # True and False are the options (case sensitive) |
---|
912 | self.CanSlamToLB = False |
---|
913 | self.CanSlamToMin = False |
---|
914 | self.CanSlamToAnywhere = False |
---|
915 | self.CanSlamToMax = True |
---|
916 | self.CanSlamToUB = False |
---|
917 | self.PH_Iters_Between_Cycle_Slams = 5 |
---|
918 | |
---|
919 | # the next line will try to force at least one variable to be |
---|
920 | # fixed every other iteration after iteration 50 |
---|
921 | # if anything can be slammed |
---|
922 | self.SlamAfterIter = 50 |
---|
923 | |
---|
924 | self.hash_hit_len_to_slam = 50 |
---|
925 | self.W_hash_history_len = 100 |
---|
926 | \end{verbatim} |
---|
927 | } |
---|
928 | |
---|
There are a number of things to notice about the contents of this file. Since it is executed
as Python code, the syntax matters. Users should change the values of numbers or change True to False, but
nothing else should be edited, with the exception of comments, which are any text after a sharp sign
(sometimes called a pound sign). Changes to this file should be tested incrementally, because errors are trapped by
the Python interpreter and may be difficult for non-Python programmers to decipher.
---|
934 | |
---|
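One way to picture why Python syntax matters here: a file of this kind can be applied by executing its text with \verb|self| bound to the extension object. The following sketch only illustrates the idea; the actual mechanics in wwphextensions.py may differ:

```python
# Sketch: applying a wwph.cfg-style file by executing its lines as
# Python with "self" bound to the extension options object.
# (Illustrative only -- the real wwphextensions.py may work differently.)

class WWExtensionOptions(object):
    def __init__(self):
        # built-in defaults, overridden by the cfg text below
        self.CanSlamToMax = False
        self.FixWhenItersConvergedAtUB = 0

cfg_text = """
self.CanSlamToMax = True            # text after a sharp sign is a comment
self.FixWhenItersConvergedAtUB = 25
"""

opts = WWExtensionOptions()
exec(cfg_text, {"self": opts})
print(opts.CanSlamToMax, opts.FixWhenItersConvergedAtUB)
```

Because the text is executed verbatim, a stray character becomes a Python SyntaxError rather than a friendly configuration message.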
This particular example file allows variables to be fixed if all scenarios have agreed on the
upper bound for twenty-five iterations in a row. Since the \verb|ProduceSizeFirstStage| variables are the only discrete
variables in the first stage with an upper bound, they are the only variables affected by this.
---|
938 | |
---|
This example allows slamming, but only to the max across scenarios. This is different from the upper bound, even for
binary variables, because a variable could be selected for slamming for which all scenarios have agreed on the same value
(which could be the lower bound). Data providing an override for this default, as well as control over the selection priority for variables to slam, are provided in the wwph.suffixes file.
---|
942 | |
---|
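The distinction between slam destinations can be made concrete with a small sketch (the helper function, values, and bounds are illustrative, not the pysp implementation):

```python
# Slam targets for cycle-breaking fixes of one variable.
# "Max" is the largest value any scenario currently assigns,
# which can differ from the declared upper bound.

def slam_target(scenario_values, lb, ub, mode):
    """Return the value to which the variable would be fixed."""
    if mode == "LB":
        return lb
    if mode == "UB":
        return ub
    if mode == "Min":
        return min(scenario_values)
    if mode == "Max":
        return max(scenario_values)
    raise ValueError("unknown slam mode: %s" % mode)

# a binary variable on which all scenarios currently agree at 0
values = [0, 0, 0]
print(slam_target(values, 0, 1, "Max"))  # 0: the max across scenarios
print(slam_target(values, 0, 1, "UB"))   # 1: the declared upper bound
```

With CanSlamToMax, a variable on which all scenarios already agree is simply fixed at that agreed value, even when the value is far from its bound.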
943 | \subsection{wwph.suffixes} |
---|

{\small
\begin{verbatim}
# Optional suffixes to help control PH

# Text triples specifying (variable/variable-index, suffix-name, suffix-value) tuples.

# override the defaults given in wwph.cfg as needed
# If no scenario needs the smallest size to be produced, then just forget it
ProduceSizeFirstStage[1] Iter0FixIfConvergedAtLB 1

# if all scenarios need either of the two largest sizes, just lock them in
ProduceSizeFirstStage[9] Iter0FixIfConvergedAtUB 1
ProduceSizeFirstStage[10] Iter0FixIfConvergedAtUB 1

# and be aggressive with the smallest size after iteration 0
ProduceSizeFirstStage[1] FixWhenItersConvergedAtLB 8

# if the production quantities have been converged for a long time, fix them
NumProducedFirstStage[*] FixWhenItersConvergedAtNB 20

# don't allow slamming of variables other than ProduceSize
# for completeness, disallow all slamming destinations

NumProducedFirstStage[*] CanSlamToLB False
NumProducedFirstStage[*] CanSlamToMin False
NumProducedFirstStage[*] CanSlamToAnywhere False
NumProducedFirstStage[*] CanSlamToMax False
NumProducedFirstStage[*] CanSlamToUB False

NumUnitsCutFirstStage[*,*] CanSlamToLB False
NumUnitsCutFirstStage[*,*] CanSlamToMin False
NumUnitsCutFirstStage[*,*] CanSlamToAnywhere False
NumUnitsCutFirstStage[*,*] CanSlamToMax False
NumUnitsCutFirstStage[*,*] CanSlamToUB False

# higher priority means slam these first
ProduceSizeFirstStage[1] SlammingPriority 1
ProduceSizeFirstStage[2] SlammingPriority 2
ProduceSizeFirstStage[3] SlammingPriority 3
ProduceSizeFirstStage[4] SlammingPriority 4
ProduceSizeFirstStage[5] SlammingPriority 5
ProduceSizeFirstStage[6] SlammingPriority 6
ProduceSizeFirstStage[7] SlammingPriority 7
ProduceSizeFirstStage[8] SlammingPriority 8
ProduceSizeFirstStage[9] SlammingPriority 9
ProduceSizeFirstStage[10] SlammingPriority 10

\end{verbatim}
}

The first thing to notice is that this is a data file, not a file of Python commands.
Consequently, the error messages are simpler when something is wrong, but the file can be verbose,
and a computer program or spreadsheet might be required to produce it. The next thing to notice
is that indexes can be specified either using valid index values or using wildcards.

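For models with many variables and indexes, the triples can be generated by a small script rather than by hand. The following sketch is illustrative only; the output file name, variable names, index ranges, and suffix values simply mirror the example above:

```python
# Generate a pysp suffix file programmatically.  The variable names,
# index ranges, and suffix values below mirror the example above and
# are illustrative only.
slam_suffixes = ["CanSlamToLB", "CanSlamToMin", "CanSlamToAnywhere",
                 "CanSlamToMax", "CanSlamToUB"]

with open("wwph.suffixes", "w") as f:
    f.write("# generated suffix file\n")
    # disallow all slamming destinations for an indexed variable
    for suffix in slam_suffixes:
        f.write("NumProducedFirstStage[*] %s False\n" % suffix)
    # give each ProduceSize variable a slamming priority equal to its index
    for i in range(1, 11):
        f.write("ProduceSizeFirstStage[%d] SlammingPriority %d\n" % (i, i))
```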
This file overrides the defaults to allow some fixing after iteration zero. Fixing the smallest
size (index 1) to zero cannot result in infeasibility because larger sizes can be cut to
satisfy demand for that size. Along the same lines, aggressively fixing the
production indicator for the larger sizes to 1 is also
safe, and perhaps not even sub-optimal, if all scenarios ``want'' those sizes anyway.

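The effect of the iteration-0 suffixes can be paraphrased in a few lines of code. The sketch below is an illustrative restatement of the rule, not the WW extension implementation; the function and argument names are hypothetical:

```python
# Illustrative paraphrase of the iteration-0 fixing rules controlled by
# suffixes such as Iter0FixIfConvergedAtLB/AtUB; this is NOT the actual
# WW PH extension code.

def iter0_fix_value(scenario_values, lower_bound, upper_bound,
                    fix_at_lb, fix_at_ub):
    """Return the value to fix a variable at after iteration 0, or None."""
    converged = all(v == scenario_values[0] for v in scenario_values)
    if not converged:
        return None
    value = scenario_values[0]
    if fix_at_lb and value == lower_bound:
        return value
    if fix_at_ub and value == upper_bound:
        return value
    return None

# all scenarios "want" the size, so lock it in (cf. Iter0FixIfConvergedAtUB 1)
print(iter0_fix_value([1, 1, 1], 0, 1, False, True))  # prints 1
# scenarios disagree, so the variable is left free
print(iter0_fix_value([0, 1, 1], 0, 1, True, True))   # prints None
```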
\subsection{rhosetter.cfg}

This file sets the PH $\rho$ value for each first-stage variable to be proportional to the corresponding cost coefficient, which scales the penalty terms to the units of the objective function.

\begin{verbatim}
# rhosetter.cfg

# users probably want this line intact so they can use model_instance
model_instance = self._model_instance

MyRhoFactor = 0.1

for i in model_instance.ProductSizes:
    self.setRhoAllScenarios(model_instance.ProduceSizeFirstStage[i],
                            model_instance.SetupCosts[i] * MyRhoFactor)
    self.setRhoAllScenarios(model_instance.NumProducedFirstStage[i],
                            model_instance.UnitProductionCosts[i] * MyRhoFactor)
    for j in model_instance.ProductSizes:
        if j <= i:
            self.setRhoAllScenarios(model_instance.NumUnitsCutFirstStage[i,j],
                                    model_instance.UnitReductionCost * MyRhoFactor)
\end{verbatim}

\subsection{Execution}

To run this example, connect to the sizes directory, which is something like:
\begin{verbatim}
coopr\examples\pysp\sizes
\end{verbatim}

Then use
\begin{verbatim}
runph --model-directory=models --instance-directory=SIZES10 \
      --enable-ww-extensions --ww-extension-cfgfile=config/wwph.cfg \
      --ww-extension-suffixfile=config/wwph.suffixes \
      --rho-cfgfile=config/rhosetter.cfg
\end{verbatim}

Since this instance is so small by modern standards, these enhancements are not needed and may even increase the total solution
time. They are provided to illustrate the features of the extensions, not to demonstrate why you might need them; much larger instances are required for that.

\section{Linearizing the Proximal Term \label{LinearSec}}

\subsection{Introduction}

For a decision variable $x$, the proximal term added to the objective function for each scenario subproblem at PH iteration $k$ is
$$
\left(x - \bar{x}^{(k-1)}\right)^{2}
$$
where $\bar{x}^{(k-1)}$ is the average over the scenarios from the previous iteration. Expanding the square reveals that the only
quadratic term is $x^{2}$. For binary variables, this is equal to $x$, although this is not the case when a relaxation is solved.
For binary variables, the default behaviour is therefore to replace the quadratic term with $x$, but an option allows the quadratic to be
retained because doing so can affect the subproblem solution time: the quadratic term is present when the relaxations are solved
as part of the branch and bound process.
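Concretely, expanding the square and substituting $x^{2} = x$, which is valid for $x \in \{0,1\}$, produces a term that is linear in $x$ (the iteration superscript on $\bar{x}$ is suppressed for readability):
$$
\left(x - \bar{x}\right)^{2} = x^{2} - 2\bar{x}x + \bar{x}^{2}
                             = \left(1 - 2\bar{x}\right)x + \bar{x}^{2} .
$$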

For non-binary variables, an option exists to replace the $x^{2}$ term with a piece-wise linear approximation, and the number of breakpoints in the
approximation is under user control. This can have a significant effect on the CPU time required to solve the sub-problems because the presence of
the quadratic term increases the solution times significantly. However, linearization results in a time-quality tradeoff: increasing
the number of breakpoints increases the fidelity of the approximation, but each breakpoint introduces another (unseen) binary variable, so solution times generally
increase.
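To make the tradeoff concrete, here is a sketch of a secant-based piece-wise linear over-approximation of $x^{2}$ on uniformly spaced breakpoints. It is illustrative only and is not the pysp implementation, which builds the approximation inside the subproblem objectives:

```python
# Piecewise-linear over-approximation of x**2 on [lo, hi] using secant
# segments between uniformly spaced breakpoints.  Illustrative sketch,
# not the pysp implementation.

def uniform_breakpoints(lo, hi, n):
    """Return n+1 uniformly spaced breakpoints spanning [lo, hi]."""
    step = (hi - lo) / float(n)
    return [lo + k * step for k in range(n + 1)]

def pwl_square(x, breakpoints):
    """Evaluate the secant approximation of x**2 at x."""
    for a, b in zip(breakpoints, breakpoints[1:]):
        if a <= x <= b:
            # secant line through (a, a**2) and (b, b**2); its slope is a + b
            return a * a + (a + b) * (x - a)
    raise ValueError("x outside the breakpoint range")

bps = uniform_breakpoints(0.0, 10.0, 4)   # breakpoints at 0, 2.5, 5, 7.5, 10
print(pwl_square(5.0, bps))               # exact at a breakpoint: 25.0
print(pwl_square(6.0, bps))               # between breakpoints: 37.5 > 36
```

Because $x^{2}$ is convex, the secant segments always over-estimate it between breakpoints, and adding breakpoints tightens the approximation at the cost of more binary variables.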

A few strategies for placing the breakpoints are supported via the command line option
\verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY. A value of 0
indicates a uniform distribution of breakpoints. A value of 1 indicates
breakpoints at the min and max observed for the node in the scenario tree, uniformly
in between. A value of 2 indicates a more aggressive concentration of
breakpoints near the observed node min/max.
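As an illustration of the second strategy, the following sketch places breakpoints at the observed node min and max with uniform spacing in between, while keeping the variable bounds as the outermost points. The function name and signature are hypothetical, not the pysp API:

```python
# Sketch of breakpoint-placement strategy 1 described above: breakpoints
# at the observed node minimum and maximum, uniformly spaced in between,
# with the variable bounds kept as the outermost breakpoints.
# Hypothetical helper, not the pysp API.

def strategy_one_breakpoints(var_lb, var_ub, node_min, node_max, n):
    """Variable bounds plus n+1 uniform points spanning [node_min, node_max]."""
    step = (node_max - node_min) / float(n)
    inner = [node_min + k * step for k in range(n + 1)]
    # de-duplicate in case a node extreme coincides with a variable bound
    return sorted(set([var_lb, var_ub] + inner))

print(strategy_one_breakpoints(0.0, 100.0, 20.0, 60.0, 4))
# prints [0.0, 20.0, 30.0, 40.0, 50.0, 60.0, 100.0]
```

Concentrating breakpoints where the scenario solutions actually fall spends the (expensive) binary variables where the approximation accuracy matters most.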

Upper and lower bounds for variables must be specified when the linearization option is chosen. This can be done by specifying bounds in the
reference model or by using the \verb|--bounds-cfgfile| command line option. It is, of course, best to use meaningful bounds provided
in the reference model; however, the modeller must be careful not to use estimated bounds that are so tight that they preclude
an optimal (or even a feasible) solution to the overall stochastic problem, even though they might not cut off any optimal solutions for a
particular scenario. The use of a bounds cfgfile is an advanced topic, but it enables the modeller to use bounds that cannot create
infeasibilities.

\subsection{Bounds-cfgfile}

Using a bounds cfgfile is an advanced topic: the modeler writes Python/Pyomo code that will be executed by the
ph.py file that is the core of the PH algorithm. The first executable line in a bounds file is typically:

\begin{verbatim}
model_instance = self._model_instance
\end{verbatim}

This establishes a local variable called ``model\_instance'' that can be used to refer to the model (of course, a different name can be
used, such as MODINS). The object ``self'' on the right-hand side of this assignment refers to the core PH object. The model, in turn, contains the parameters and
variables defined in the ReferenceModel.py file, which can be accessed by name. For example, with the Farmer model, the cfg file
sets the upper bound on DevotedAcreage to the value of the parameter TOTAL\_ACREAGE at initialization (since this is Python, the
parentheses after TOTAL\_ACREAGE cause the value in TOTAL\_ACREAGE to be assigned rather than the name of the parameter object):

\begin{verbatim}
# only need to set upper bounds on first-stage variables,
# i.e., those being blended.

model_instance = self._model_instance

# the value supplied to the bound-setting method must be a float
upper_bound = float(model_instance.TOTAL_ACREAGE())

for index in model_instance.CROPS:
    # the lower and upper bounds are expected to be floats,
    # and trigger an exception if not.
    self.setVariableBoundsAllScenarios("DevotedAcreage", index,
                                       0.0, upper_bound)
\end{verbatim}

The same thing could be accomplished by setting the upper bound in the model file, but it does serve as a simple example of a bounds cfg file.

%\subsection{Sizes Example}


\section{Solving sub-problems in Parallel \label{parallelsec}}

Pyomo makes use of the pyro facility to enable sub-problems to be assigned easily for solution in parallel. This capability is supported by pysp. We will refer to a single master computer
and multiple slave computers in the discussion here, but in fact the master computing processes can (and for synchronous parallel, should) be on a
processor that also runs a slave process.

Here are the commands in order:
\begin{enumerate}
\item On the master: \verb|coopr-ns|
\item On the master: \verb|dispatch_srvr|
\item On each slave: \verb|pyro_mip_server|
\item On the master: \verb|runph ... --solver-manager=pyro ...|
\end{enumerate}
Note that the command \verb|coopr-ns| and the argument \verb|solver-manager| have a dash in the middle, while
the commands \verb|dispatch_srvr| and \verb|pyro_mip_server| have underscores. The first three commands launch processes
that have no internal mechanism for termination; i.e., they will be terminated only if they crash or if they are
killed by an external process. It is common to launch these processes with output redirection, such as \verb|coopr-ns >& cooprns.log|. The
\verb|runph| command is a normal runph command with the usual arguments, with the additional specification that subproblem solves should
be directed to the pyro solver manager.