# Changeset 2938

Timestamp:
Aug 14, 2010 6:42:01 PM
Message:

Merged revisions 2734-2937 via svnmerge from
https://software.sandia.gov/svn/public/coopr/coopr.pysp/trunk

........

r2772 | wehart | 2010-07-05 15:29:34 -0600 (Mon, 05 Jul 2010) | 3 lines

Eliminating the direct use of cProfile, which is not
backwards compatible to Python 2.4

........
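The pattern r2772 describes can be sketched as follows: prefer the C-accelerated cProfile module, and fall back to the pure-Python profile module on interpreters where it is unavailable (such as some Python 2.4 builds). The `profile_call` helper and its use of `tempfile` are illustrative, not the actual Coopr code.

```python
import pstats
import tempfile

# Prefer cProfile; fall back to the slower pure-Python profiler if absent.
try:
    import cProfile as profile
except ImportError:
    import profile


def profile_call(func, *args):
    """Run func under whichever profiler is available; return a Stats object."""
    tfile = tempfile.NamedTemporaryFile(suffix=".profile", delete=False).name
    # Both cProfile and profile expose runctx with the same signature.
    profile.runctx("func(*args)", globals(), {"func": func, "args": args}, tfile)
    return pstats.Stats(tfile).strip_dirs()


stats = profile_call(sum, range(1000))
stats.sort_stats("time", "cumulative")
```

Because the two modules share an interface, downstream code only ever refers to the `profile` name and never needs to know which implementation was imported.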

r2781 | jwatson | 2010-07-09 09:24:27 -0600 (Fri, 09 Jul 2010) | 3 lines

In the PySP farmer example, changing "MeanYield" to "Yield" - "MeanYield" is obviously incorrect (it's just "Yield"), as I discovered while writing the PySP journal article.

........

r2790 | khunter | 2010-07-13 16:16:46 -0600 (Tue, 13 Jul 2010) | 3 lines

NFC: Remove some whitespace, so it's clear what the *actual* changes
are in the next commit.

........

r2791 | khunter | 2010-07-13 16:18:38 -0600 (Tue, 13 Jul 2010) | 3 lines

Error message: automatic cast to string, fixing a possible
ugly traceback from an already-known error point.

........

r2792 | khunter | 2010-07-13 16:20:59 -0600 (Tue, 13 Jul 2010) | 3 lines

Fix bug where pprint dies if stage names are non-strings.
Basically, cast 'em to a string.

........
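The fix r2792 describes boils down to this: concatenating a label with a non-string stage name raises a TypeError, so the pretty-printer casts names with str() first. A minimal sketch, with hypothetical stage names:

```python
# Mixed stage names: a string, an int, and a tuple (hypothetical values).
stage_names = ["FirstStage", 2, ("Node", 3)]

lines = []
for name in stage_names:
    # "\tStage=" + name would raise TypeError for the int and tuple entries;
    # str() makes the output robust to numeric (or any non-string) names.
    lines.append("\tStage=" + str(name))

print("\n".join(lines))
```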

r2800 | khunter | 2010-07-14 15:00:10 -0600 (Wed, 14 Jul 2010) | 3 lines

NFC: remove EOL whitespace so as not to obfuscate actual work in
next commit

........

r2801 | khunter | 2010-07-14 15:02:15 -0600 (Wed, 14 Jul 2010) | 3 lines

Allow for the possibility of numeric names. (Fix pprint
assumption.) Similar to r2792

........

r2808 | khunter | 2010-07-15 17:11:18 -0600 (Thu, 15 Jul 2010) | 3 lines

No wording or pdf output changes, but reformat LaTeX so that it's
easier to see what changes in next commit.

........

r2809 | khunter | 2010-07-15 17:13:02 -0600 (Thu, 15 Jul 2010) | 3 lines

Minor grammar fixes, and resolution of a word-wrap issue for a file
name.

........

r2810 | khunter | 2010-07-15 17:16:21 -0600 (Thu, 15 Jul 2010) | 4 lines

Found an almost helpful error message while working on my model;
figure I can't be the only one, so add an error message that asks a
question about a perhaps noob stepping stone issue.

........

r2815 | khunter | 2010-07-19 17:35:13 -0600 (Mon, 19 Jul 2010) | 2 lines

NFC: remove EOL whitespace ...

........

r2816 | khunter | 2010-07-19 17:37:49 -0600 (Mon, 19 Jul 2010) | 2 lines

Remove unnecessary variable set and bound check.

........

r2817 | khunter | 2010-07-19 17:39:27 -0600 (Mon, 19 Jul 2010) | 5 lines

Ref #4093 "PySP missing import"

Fix for --linearize-nonbinary-penalty-terms where stage cost
variable is indexed.

........

r2818 | khunter | 2010-07-19 17:54:53 -0600 (Mon, 19 Jul 2010) | 3 lines

Make use of Python functions as first-class citizens to clean up the code.

........
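The cleanup style r2818 alludes to can be illustrated with a dispatch table: treating functions as first-class values replaces a repetitive if/elif chain. The initializer names and modes below are invented for the example.

```python
def scenario_based_init(name):
    return "scenario:" + name

def node_based_init(name):
    return "node:" + name

# One entry per mode; adding a mode means adding one line, not another branch.
INITIALIZERS = {
    "scenario": scenario_based_init,
    "node": node_based_init,
}

def initialize(mode, name):
    try:
        return INITIALIZERS[mode](name)
    except KeyError:
        raise ValueError("unknown initialization mode: " + mode)

print(initialize("node", "FarmScenario"))
```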

r2832 | jwatson | 2010-07-21 13:07:47 -0600 (Wed, 21 Jul 2010) | 3 lines

Adding some enhanced verbosity to the ef writer to aid debug.

........

r2833 | jwatson | 2010-07-21 13:54:51 -0600 (Wed, 21 Jul 2010) | 3 lines

Fixes to pretty-print and solution computation in the PySP scenario tree object to deal with nodes in the same stage that have heterogeneous indices for the same variable name. Comes up in EPA sensor placement formulation. Requires some real thought - runs for now.

........

r2834 | jwatson | 2010-07-21 15:03:15 -0600 (Wed, 21 Jul 2010) | 3 lines

Various updates to PySP computeconf while debugging/exercising the main script.

........

r2835 | jwatson | 2010-07-21 17:29:53 -0600 (Wed, 21 Jul 2010) | 3 lines

More computeconf fixes.

........

r2836 | jwatson | 2010-07-22 08:07:16 -0600 (Thu, 22 Jul 2010) | 3 lines

Eliminating a legacy debug output unconditionally generated when compressing a scenario tree.

........

r2837 | jwatson | 2010-07-22 09:12:02 -0600 (Thu, 22 Jul 2010) | 3 lines

Various touch-ups to compute conf PySP script - it looks good to go!

........

r2838 | jwatson | 2010-07-22 13:03:51 -0600 (Thu, 22 Jul 2010) | 3 lines

Fixed bug with random seed initialization.

........
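The actual r2838 fix is not shown in this changeset, but a classic seed-initialization bug looks like this: reseeding inside the sampling loop makes every draw identical, instead of seeding once and letting the stream advance. A hedged sketch:

```python
import random

def draws_reseeding_each_time(seed, n):
    values = []
    for _ in range(n):
        random.seed(seed)          # BUG: resets the generator every iteration
        values.append(random.random())
    return values

def draws_seeded_once(seed, n):
    random.seed(seed)              # seed once, up front
    return [random.random() for _ in range(n)]

assert len(set(draws_reseeding_each_time(42, 5))) == 1   # stream collapsed
assert len(set(draws_seeded_once(42, 5))) == 5           # distinct draws
```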

r2842 | jwatson | 2010-07-23 15:36:23 -0600 (Fri, 23 Jul 2010) | 3 lines

Adding --keep-solver-files option to the PySP runef script, to aid debugging.

........

r2852 | jwatson | 2010-07-23 21:36:51 -0600 (Fri, 23 Jul 2010) | 3 lines

Enforced the PySP convention that underscore characters not be part of scenario names. A ValueError is now thrown if such a scenario name is encountered. It is better to do this up-front, as otherwise it causes parse problems when reading solutions later on.

........
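The convention r2852 enforces can be sketched as an up-front check: reject scenario names containing underscores, since (per the commit message) they cause parse problems when reading solutions later on. The function name is illustrative, not the actual PySP API.

```python
def validate_scenario_name(name):
    # Fail fast: underscores in scenario names break solution parsing later.
    if "_" in str(name):
        raise ValueError(
            "Underscore characters are not allowed in scenario names: %r" % (name,))
    return name

validate_scenario_name("AboveAverageScenario")  # accepted

try:
    validate_scenario_name("Above_Average")
except ValueError as exc:
    print("rejected:", exc)
```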

r2917 | jwatson | 2010-08-10 17:04:15 -0600 (Tue, 10 Aug 2010) | 3 lines

Cleaning up the PySP confidence interval computation code, adding error checking, comments, etc.

........

r2918 | jwatson | 2010-08-10 18:55:47 -0600 (Tue, 10 Aug 2010) | 3 lines

More cleanup on the PySP confidence interval computation script.

........

r2919 | jwatson | 2010-08-10 19:54:36 -0600 (Tue, 10 Aug 2010) | 3 lines

Simplified the PySP confidence interval code, specifically using the extensive form library to leverage common routines.

........

r2923 | jwatson | 2010-08-13 15:36:17 -0600 (Fri, 13 Aug 2010) | 3 lines

Removing a redundant preprocess() method invocation for scenario instances created via node-based initialization in Progressive Hedging.

........

r2931 | wehart | 2010-08-14 15:54:35 -0600 (Sat, 14 Aug 2010) | 2 lines

Categorizing slow PySP tests as 'nightly'

........

Location:
coopr.pysp/stable/2.4
Files:
26 edited

• ## coopr.pysp/stable/2.4

• Property svnmerge-integrated changed /coopr.pysp/trunk merged: 2772,2781,2790-2792,2800-2801,2808-2810,2815-2818,2832-2838,2842,2852,2917-2919,2923,2931
• ## coopr.pysp/stable/2.4/coopr/pysp/ef.py

```python
master_scenario_model = None
master_scenario_instance = None

if verbose_output:
    print "Constructing reference model and instance"
try:
    from coopr.pysp.util.scenariomodels import scenario_tree_model
    if verbose_output:
        print "Constructing scenario tree instance"
    scenario_tree_instance = scenario_tree_model.create(instance_directory+os.sep+"ScenarioStructure.dat")
    # construct the scenario tree
    if verbose_output:
        print "Constructing scenario tree object"
    scenario_tree = ScenarioTree(scenarioinstance=master_scenario_instance,
                                 scenariotreeinstance=scenario_tree_instance)
```
• ## coopr.pysp/stable/2.4/coopr/pysp/ef_writer_script.py

```diff
 import textwrap
 import traceback
-import cProfile
+try:
+    import cProfile as profile
+except ImportError:
+    import profile
 import pstats
 import gc

 parser.add_option("--solver-options",
-                  help="Solver options for the extension form problem",
+                  help="Solver options for the extension form problem.",
                   action="append", dest="solver_options", default=[])
 parser.add_option("--mipgap",
-                  help="Specifies the mipgap for the EF solve",
+                  help="Specifies the mipgap for the EF solve.",
                   action="store", dest="mipgap", default=None)
 parser.add_option("--output-solver-log",
-                  help="Output solver log during the extensive form solve",
+                  help="Output solver log during the extensive form solve.",
                   action="store_true", dest="output_solver_log", default=False)
+parser.add_option("--keep-solver-files",
+                  help="Retain temporary input and output files for solve.",
+                  action="store_true", dest="keep_solver_files", default=False)
 parser.add_option("--profile",

 else:
     ef_solver.mipgap = options.mipgap
+if options.keep_solver_files is True:
+    ef_solver.keepFiles = True
 ef_solver_manager = SolverManagerFactory(options.solver_manager_type)

 tfile = pyutilib.services.TempfileManager.create_tempfile(suffix=".profile")
-tmp = cProfile.runctx('run_ef_writer(options,args)',globals(),locals(),tfile)
+tmp = profile.runctx('run_ef_writer(options,args)',globals(),locals(),tfile)
 p = pstats.Stats(tfile).strip_dirs()
 p.sort_stats('time', 'cum')
```
• ## coopr.pysp/stable/2.4/coopr/pysp/ph.py

• ## coopr.pysp/stable/2.4/coopr/pysp/phinit.py

```diff
 # for profiling
-import cProfile
+try:
+    import cProfile as profile
+except ImportError:
+    import profile
 import pstats

 #
 # Create the reference model / instance and scenario tree instance for PH.
 # IMPT: This method should be moved into a more generic module - it has nothing
 #       to do with PH, and is used elsewhere (by routines that shouldn't have
 #       to know about PH).
 #

 # validate the tree prior to doing anything serious
 print ""
 if scenario_tree.validate() is False:
     print "***ERROR: Scenario tree is invalid****"
 if options.verbose is True:
     print "Scenario tree is valid!"
 print ""

 tfile = pyutilib.services.TempfileManager.create_tempfile(suffix=".profile")
-tmp = cProfile.runctx('exec_ph(options)',globals(),locals(),tfile)
+tmp = profile.runctx('exec_ph(options)',globals(),locals(),tfile)
 p = pstats.Stats(tfile).strip_dirs()
 p.sort_stats('time', 'cum')
```

• ## coopr.pysp/stable/2.4/coopr/pysp/phserver.py

 r2410
# for profiling
try:
    import cProfile as profile
except ImportError:
    import profile
import pstats

#
tfile = pyutilib.services.TempfileManager.create_tempfile(suffix=".profile")
tmp = profile.runctx('exec_ph(options)',globals(),locals(),tfile)
p = pstats.Stats(tfile).strip_dirs()
p.sort_stats('time', 'cum')
• ## coopr.pysp/stable/2.4/coopr/pysp/phutils.py

 r2465
    return return_index[0]
else:
    return return_index

#
return_index = ()
for index in indices:
    try:
        transformed_index = int(index)
    except ValueError:
        transformed_index = index
    return_index = return_index + (transformed_index,)

# ditto with the index template. one-dimensional
# indices in pyomo are not tuples, but anything
# else is.
if type(index) != tuple:
else:
    print "Node-based instance initialization enabled"
scenario = scenario_tree_instance.get_scenario(scenario_name)
scenario_instance = None
scenario_data.read(model=scenario_instance)
scenario_instance.load(scenario_data)
scenario_instance.preprocess()
except:
    print "Encountered exception in model instance creation - traceback:"

def create_ph_parameters(instance, scenario_tree, default_rho, linearizing_penalty_terms):
    new_penalty_variable_names = []
    # first, gather all unique variables referenced in any stage
    # other than the last, independent of specific indices. this
    # isn't an issue now, but it could easily become one (esp. in avoiding deep copies).
instance_variables = {}
for stage in scenario_tree._stages[:-1]:
    for (reference_variable, index_template, reference_indices) in stage._variables:
        if reference_variable.name not in instance_variables.keys():
            instance_variables[reference_variable.name] = reference_variable

# PH AVG
new_avg_index = reference_variable._index
new_avg_parameter_name = "PHAVG_"+reference_variable.name
new_avg_parameter = None

# PH RHO
new_rho_index = reference_variable._index
new_rho_parameter_name = "PHRHO_"+reference_variable.name
new_rho_parameter = None

new_penalty_term_variable = None
if (len(new_avg_index) is 1) and (None in new_avg_index):
    new_penalty_term_variable = Var(name=new_penalty_term_variable_name, bounds=(0.0,None))
else:
    new_penalty_term_variable = Var(new_penalty_term_variable_index, name=new_penalty_term_variable_name, bounds=(0.0,None))
new_penalty_variable_names.append(new_penalty_term_variable_name)

# BINARY INDICATOR PARAMETER FOR WHETHER SPECIFIC VARIABLES ARE BLENDED. FOR ADVANCED USERS ONLY.
# also controls whether weight updates proceed at any iteration.
new_blend_parameter[index] = 1

return new_penalty_variable_names
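The int-casting loop in the phutils fragment above normalizes variable indices read from text data files, where numeric components arrive as strings. A self-contained sketch of that idea (the function name `transform_index` is illustrative, not the actual phutils API):

```python
def transform_index(indices):
    # Cast each index component to an int where possible; non-numeric
    # components are kept as-is (mirrors the try/except ValueError
    # pattern in the phutils fragment above).
    return_index = ()
    for index in indices:
        try:
            transformed_index = int(index)
        except ValueError:
            transformed_index = index
        return_index = return_index + (transformed_index,)
    # one-dimensional indices in pyomo are not tuples, so collapse
    # singletons back to a scalar
    if len(return_index) == 1:
        return return_index[0]
    return return_index
```

So `("42",)` becomes the scalar `42`, while a mixed index like `("1", "foo")` becomes `(1, "foo")`.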
• ## coopr.pysp/stable/2.4/coopr/pysp/scenariotree.py

 r2461
#
# initialize the _solutions attribute of a tree node.
#
#       construct unless we're actually going to use!
#

# for each variable referenced in the stage, clone the variable
# for purposes of storing solutions. we are being wasteful in
# terms copying indices that may not be referenced in the stage.
# this is something that we might revisit if space/performance
# here are computed elsewhere - and that entity is responsible for
# ensuring feasibility. this also leaves room for specifying infeasible
# or partial solutions.
new_variable_index = variable._index
new_variable_name = variable.name
self._solutions_initialized = True

""" Constructor

# statistic computed over all scenarios for that node. the
# parameters are named as the source variable name suffixed
# by one of: "NODEMIN", "NODEAVG", and "NODEMAX".
# NOTE: the averages are probability_weighted - the min/max
#       values are not.
#       convention is assumed to be enforced by whoever populates
#       these parameters.
self._averages = {}
self._minimums = {}
self._maximums = {}

if self._solutions_initialized is False:
    self._initialize_solutions()
for variable_name, variable in self._solutions.items():
    except:
        raise RuntimeError, "No averages parameter present on tree node="+self._name+" for variable="+variable_name
    for index in variable._index:
        if variable[index].active is True:

if self._solutions_initialized is False:
    self._initialize_solutions()
for variable_name, variable in self._solutions.items():
    node_probability = 0.0
    avg = 0.0
    num_scenarios_with_index = 0
    for scenario in self._scenarios:
        scenario_instance = scenario_instance_map[scenario._name]
        node_probability += scenario._probability
        scenario_variable = getattr(scenario_instance, variable.name)
        if index in scenario_variable:
            num_scenarios_with_index = num_scenarios_with_index + 1
            var_value = getattr(scenario_instance, variable.name)[index].value
            avg += (scenario._probability * var_value)
    if num_scenarios_with_index > 0:
        variable[index].value = avg / node_probability

#
# a utility to compute the cost of the current node plus the expected costs of child nodes.
#

if variable_name not in self._reference_instance.active_components(Var):
    raise ValueError, "Variable=" + variable_name + " associated with stage=" + stage_id + " is not present in model=" + self._reference_instance.name
variable = self._reference_instance.active_components(Var)[variable_name]
# extract all "real", i.e., fully specified, indices matching the index template.
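The snapshot logic above accumulates a probability-weighted sum over the scenarios passing through a tree node, skips scenarios whose instances lack the variable index, and renormalizes by the node probability. A standalone sketch of that computation (the function and its argument layout are illustrative, not the ScenarioTreeNode API):

```python
def node_weighted_average(scenario_values):
    # scenario_values: (probability, value) pairs for the scenarios
    # passing through a tree node; value is None when the scenario
    # instance does not carry the variable index.
    node_probability = 0.0
    avg = 0.0
    num_scenarios_with_index = 0
    for probability, value in scenario_values:
        node_probability += probability
        if value is not None:
            num_scenarios_with_index += 1
            avg += probability * value
    # renormalize by the node probability, as the snapshot code does;
    # report nothing if no scenario carried the index
    if num_scenarios_with_index > 0:
        return avg / node_probability
    return None
```

Renormalizing by the node probability (rather than dividing by the scenario count) is what makes the averages probability-weighted, as the NODEAVG comment above states.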
match_indices = extractVariableIndices(variable, index_template)

# there is a possibility that no indices match the input template.
if len(match_indices) == 0:
    raise RuntimeError, "No indices match template="+str(index_template)+" for variable="+variable_name+" ; encountered in scenario tree specification for model="+self._reference_instance.name

stage._variables.append((variable, index_template, match_indices))

# match template (e.g., "foo[*,*]") instead of just the variable
# name (e.g., "foo") to represent the set of all indices.
# if the variable is a singleton - that is, non-indexed - no brackets is fine.
# we'll just tag the var[None] variable value with the (suffix,value) pair.
if None not in variable._index:
    raise RuntimeError, "Variable="+variable_string+" is an indexed variable, and templates must specify an index match; encountered in scenario tree specification for model="+self._reference_instance.name
match_indices = []

raise ValueError, "Unknown stage=" + stage_id + " specified in scenario tree constructor (stage->cost variable map)"
stage = self._stage_map[stage_id]
cost_variable_string = stage_cost_variable_ids[stage_id].value # de-reference is required to access the parameter value

if cost_variable_name not in self._reference_instance.active_components(Var):
    raise ValueError, "Variable=" + cost_variable_name + " associated with stage=" + stage_id + " is not present in model=" + self._reference_instance.name
cost_variable = self._reference_instance.active_components(Var)[cost_variable_name]
# extract all "real", i.e., fully specified, indices matching the index template.
# only one index can be supplied for a stage cost variable.
if len(match_indices) > 1:
    msg = 'Only one index can be specified for a stage cost '     \
          'variable - %s match template "%s" for variable "%s" ;' \
          ' encountered in scenario tree specification for model' \
          ' "%s"'
    raise RuntimeError, msg % (
        len(match_indices),
        index_template,
        cost_variable_name,
        self._reference_instance.name )
elif len(match_indices) == 0:
    msg = 'Stage cost index not found: %s[%s]\n'                  \
          'Do you have an off-by-one miscalculation, or did you ' \
          'forget to specify it in ReferenceModel.dat?'
    raise RuntimeError, msg % ( cost_variable_name, index_template )
cost_variable_index = match_indices[0]

raise ValueError, "Cost variable=" + cost_variable_name + " associated with stage=" + stage_id + " is not present in model=" + self._reference_instance.name
cost_variable = self._reference_instance.active_components(Var)[cost_variable_name]

# store the validated info.
stage._cost_variable = (cost_variable, cost_variable_index)

scenariotreeinstance - the pyomo model specifying all scenario tree (text) data.
scenariobundlelist   - a list of scenario names to retain, i.e., cull the rest to create a reduced tree!
"""

def __init__(self, *args, **kwds):
    self._name = None # TBD - some arbitrary identifier
    self._reference_instance = kwds[key]
    elif key == "scenariotreeinstance":
        scenario_tree_instance = kwds[key]
    elif key == "scenariobundlelist":
        scenario_bundle_list = kwds[key]
    else:
        print "Unknown option=" + key + " specified in call to ScenarioTree constructor"

raise ValueError, "An ordered set of stage IDs must be supplied in the ScenarioTree constructor"

#
# construct the actual tree objects
#

raise ValueError, "Unknown stage=" + stage_name + " assigned to tree node=" + tree_node._name
new_tree_node = ScenarioTreeNode(tree_node_name,
                                 node_probability_map[tree_node_name].value,
                                 self._stage_map[stage_name],
                                 self._reference_instance)

# two-pass logic necessary when constructing scenarios.
for scenario_name in scenario_ids:
    # IMPT: the name of the scenario is assumed to have no '_' (underscore) characters in the identifier.
    #       this is required when writing the extensive form, e.g., the scenario is used in the extensive
    #       form as a prefix on variable and constraint names. this convention simplifies parsing on the
    #       back end; if the underscore isn't used as a reserved character, then some other separator
    #       symbol would be required, or we would have to engage in some complex prefix matching with
    #       all possible scenario names.
    if string.find(scenario_name, "_") != -1:
        raise ValueError, "By convention, scenario names in PySP cannot contain underscore (_) characters; the scenario in violation="+scenario_name
    new_scenario = Scenario()
    new_scenario._name=scenario_name

# active scenario tree components and compress the tree.
if scenario_bundle_list is not None:
    print "Compressing scenario tree!"
    self.compress(scenario_bundle_list)

#
# utility for compressing or culling a scenario tree based on
# a provided list of scenarios (specified by name) to retain -
# all non-referenced components are eliminated. this particular
# method compresses *in-place*, i.e., via direct modification

scenario.retain = True
# chase all nodes comprising this scenario,
# marking them for retention.
for node in scenario._node_list:
    pass
else:
    scenarios_to_delete.append(scenario)
    del self._scenario_map[scenario._name]

print "There are no scenarios associated with tree node=" + tree_node._name
return False

return True

for tree_node in self._tree_nodes:
    tree_node.snapshotSolutionFromInstances(scenario_instance_map)

#
else:
    print "Model=" + "Unassigned"
print "----------------------------------------------------"
print "Tree Nodes:"
print ""
print "\tName=" + tree_node_name
if tree_node._stage is not None:
    print "\tStage=" + str(tree_node._stage._name)
else:
    print "\t Stage=None"
for stage_name in sorted(self._stage_map.keys()):
    stage = self._stage_map[stage_name]
    print "\tName=" + str(stage_name)
    print "\tTree Nodes: "
    for tree_node in sorted(stage._tree_nodes, cmp=lambda x,y: cmp(x._name, y._name)):
    else:
        print "\t\t",variable.name,":",index_template
    print "\tCost Variable: "
    if stage._cost_variable[1] is None:
        print "\t\t" + stage._cost_variable[0].name
    else:
        print "\t\t" + stage._cost_variable[0].name + indexToString(stage._cost_variable[1])
    print ""
print "----------------------------------------------------"
print "Scenarios:"
for tree_node in scenario._node_list:
    print "\t\t" + tree_node._name
print ""
print "----------------------------------------------------"

def pprintSolution(self, epsilon=1.0e-5):
    print "----------------------------------------------------"
    print "Tree Nodes:"
    print ""
    else:
        for index in indices:
            if (solution_variable[index].active is True) and (index in solution_variable):
                value = solution_variable[index]()
                if (value is not None) and (fabs(value) > epsilon):
                    print "\t\t"+variable.name+indexToString(index)+"="+str(value)
    print ""

#
print "Model=" + "Unassigned"
print "----------------------------------------------------"
print "Tree Nodes:"
print ""
print "\tTotal scenario cost=%10.4f" % aggregate_cost
print ""
print "----------------------------------------------------"
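Several of the pprint changes in this file (r2792, r2801) exist because stage and node names read from data files may be numeric rather than strings; concatenating a number onto a string literal raises a TypeError in Python, so the fix is to cast the name first. A two-line illustration (`format_name_line` is a hypothetical helper, not PySP code):

```python
def format_name_line(name):
    # "\tName=" + 2010 would raise TypeError: str cannot be
    # concatenated with int, so cast first (the r2792/r2801 fix)
    return "\tName=" + str(name)
```

The cast is a no-op for names that are already strings, which is why it is safe to apply unconditionally.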
• ## coopr.pysp/stable/2.4/coopr/pysp/tests/unit/test_ph.py

 r2423
# Define a testing class, using the unittest.TestCase class.
#
# @unittest.category('nightly')
class TestPH(unittest.TestCase):
    self.cleanup()

TestPH = unittest.category('nightly')(TestPH)

if __name__ == "__main__":
    unittest.main()
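The `TestPH = unittest.category('nightly')(TestPH)` line above is the pre-2.6 spelling of a class decorator: Python only gained `@decorator` syntax on classes in version 2.6, so for older interpreters the decorator is applied manually after the class body. A sketch with a stand-in `category` decorator (the real `unittest.category` here presumably comes from pyutilib's testing layer, not the standard library):

```python
def category(name):
    # stand-in for a test-categorization decorator: records the
    # category name on the class and returns the class unchanged
    def mark(cls):
        cls.test_category = name
        return cls
    return mark

class TestExample(object):
    pass

# equivalent to writing "@category('nightly')" above the class
# on Python >= 2.6
TestExample = category('nightly')(TestExample)
```

Both spellings produce the same decorated class; the manual form simply keeps the test suite importable on Python 2.4.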
• ## coopr.pysp/stable/2.4/doc/pysp/pyspbody.tex

 r2162
\section{Overview}
The pysp package extends the pyomo modeling language to support multi-stage stochastic programs with enumerated scenarios. Pyomo and pysp are Python version 2.6 programs. In order to specify a program, the user must provide a reference model and a scenario tree. Provided the necessary paths have been communicated to the operating system, the command to execute the pysp package is of the form:
\begin{verbatim}
runph
\end{verbatim}
It is possible, and generally necessary, to provide command line arguments. The simplest argument causes the program to output help text:
\begin{verbatim}
runph --help
\end{verbatim}
but notice that there are two dashes before the word ``help''. Command line arguments are summarized in Section~\ref{cmdargsec}.

The underlying algorithm in pysp is based on Progressive Hedging (PH) \cite{RockafellarWets}, which decomposes the problem into sub-problems, one for each scenario. The algorithm progressively computes {\em weights} corresponding to each variable to force convergence, and also makes use of a {\em proximal} term that penalizes the squared deviation from the mean solution of the last PH iteration.

\subsection{Reference Model}
The reference model describes the problem for a canonical scenario. It does not make use of, or describe, a scenario index or any information about uncertainty. Typically, it is just the model that would be used if there were only a single scenario. It is given as a pyomo file. Data from an arbitrary scenario is needed to instantiate. The objective function needs to be separated by stages. The term for each stage should be ``assigned'' (i.e., constrained to be equal to) a variable. These variable names are reported in ScenarioStructure.dat so that they can be used for reporting purposes.
\subsection{Scenario Tree}
The scenario tree provides information about the time stages and the nature of the uncertainties. In order to specify a tree, we must indicate the time stages at which information becomes available. We also specify the nodes of a tree to indicate which variables are associated with which realization at each stage. The data for each scenario is provided in separate data files, one for each scenario.

\section{File Structure}
\end{itemize}
In this list we use ``Sname'' as the generic scenario name. The file \verb|ScenarioStructure.dat| gives the names of all the scenarios, and for each scenario there is a data file with the same name and the suffix ``.dat'' that contains the full specification of data for the scenario.

\subsection{ScenarioStructure.dat}
\begin{itemize}
\item set Scenarios: List of the names of the scenarios. These names will subsequently be used as indices in this data file, and these names will also be used as the root file names for the scenario data files (each of these will have a .dat extension) if the parameter ScenarioBasedData is set to True, which is the default.
\item set Stages: List of the names of the time stages, which must be given in time order. In the sequel we will use {\sc StageName} to represent a stage name used as an index.
\item set Nodes: List of the names of the nodes in the scenario tree. In the sequel we will use {\sc NodeName} to represent a node name used as an index.
\item param NodeStage: A list of pairs of nodes and stages to indicate the stage for each node.
\item param Parent: A list of node pairs to indicate the parent of each node that has a parent (the root node will not be listed).
\item set Children[{\sc NodeName}]: For each node that has children, provide the list of children. No sets will be given for leaf nodes.
\item param ConditionalProbability: For each node in the scenario tree, give the conditional probability. For the root node it must be given as 1, and for the children of any node with children, the conditional probabilities must sum to 1.
\item param ScenarioLeafNode: A list of scenario and node pairs to indicate the leaf node for each scenario.
\item set StageVariables[{\sc StageName}]: For each stage, list the pyomo model variables associated with that stage.
\end{itemize}
Data to instantiate these sets and parameters is provided by users in the file ScenarioStructure.dat, which can be given in AMPL \cite{ampl} format. The default behavior is one file per scenario, and each file has the full data for the scenario. An alternative is to specify just the data that changes from the root node in one file per tree node. To select this option, add the following line to ScenarioStructure.dat:

\verb|param ScenarioBasedData := False ;|

This will set it up to want a per-node file, something along the lines of what's in \verb|examples/pysp/farmer/NODEDATA|. Advanced users may be interested in seeing the file \verb|coopr/pysp/utils/scenariomodels.py|, which defines the python sets and parameters needed to describe stochastic elements. This file should not be edited.
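Putting the items above together, a minimal hypothetical ScenarioStructure.dat for a two-stage tree with two scenarios might look like the following (all names are illustrative, and \verb|x| and \verb|y| stand in for whatever pyomo model variables belong to each stage):

\begin{verbatim}
set Stages := FirstStage SecondStage ;
set Nodes := RootNode LowNode HighNode ;
param NodeStage := RootNode FirstStage
                   LowNode SecondStage
                   HighNode SecondStage ;
set Children[RootNode] := LowNode HighNode ;
param ConditionalProbability := RootNode 1.0
                                LowNode 0.5
                                HighNode 0.5 ;
set Scenarios := LowScenario HighScenario ;
param ScenarioLeafNode := LowScenario LowNode
                          HighScenario HighNode ;
set StageVariables[FirstStage] := x ;
set StageVariables[SecondStage] := y ;
\end{verbatim}

With the default \verb|ScenarioBasedData = True|, this tree expects two data files, LowScenario.dat and HighScenario.dat, each giving the full data for its scenario.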
\section{Command Line Arguments \label{cmdargsec}} The basic PH algorithm is controlled by parameters that are set as command line arguments. Note that options begin with a double dash. The basic PH algorithm is controlled by parameters that are set as command line arguments. Note that options begin with a double dash. \begin{itemize} \item \verb|-h|, \verb|--help|\\            Show help message and exit. \item \verb|--verbose|\\             Generate verbose output for both initialization and execution. Default is False. \item \verb|--report-solutions|\\     Always report PH solutions after each iteration. Enabled if --verbose is enabled. Default is False. \item \verb|--report-weights|\\    Always report PH weights prior to each iteration. Enabled if --verbose is enabled. Default is False. \item \verb|--model-directory|=MODEL\_DIRECTORY\\ The directory in which all model (reference and scenario) definitions are stored. I.e., the .py'' files. Default is ".". \item \verb|--instance-directory|=INSTANCE\_DIRECTORY\\ The directory in which all instance (reference and scenario) definitions are stored. I.e., the .dat'' files. Default is ".". \item \verb|--solver|=SOLVER\_TYPE\\  The type of solver used to solve scenario sub-problems. Default is cplex. \item \verb|--solver-manager|=SOLVER\_MANAGER\_TYPE\\ The type of solver manager used to coordinate scenario sub-problem solves. Default is serial. This option is changed in parallel applications as described in Section~\ref{parallelsec}. \item \verb|--max-iterations|=MAX\_ITERATIONS\\ The maximal number of PH iterations. Default is 100. \item \verb|--default-rho|=DEFAULT\_RHO\\ The default (global) rho for all blended variables. Default is 1. \item \verb|--rho-cfgfile|=RHO\_CFGFILE\\ The name of a configuration script to compute PH rho values. Default is None. \item \verb|--enable-termdiff|-convergence\\ Terminate PH based on the termdiff convergence metric. 
The convergcne metric is the unscaled sum of differences between variable values and the mean. Default is True. \item \verb|--enable-normalized|-termdiff-convergence\\ Terminate PH based on the normalized termdiff convergence metric. Each term in the termdiff sum is normalized by the average value (NOTE: it is NOT normalized by the number of scenarios). Default is False. \item \verb|--termdiff-threshold|=TERMDIFF\_THRESHOLD\\ The convergence threshold used in the term-diff and normalized term-diff convergence criteria. Default is 0.01, which is too low for most problems. \item \verb|--enable-free-discrete-count-convergence|\\ Terminate PH based on the free discrete variable count convergence metric. Default is False. \item \verb|--free-discrete-count-threshold|=FREE\_DISCRETE\_COUNT\_THRESHOLD\\ The convergence threshold used in the criterion based on when the free discrete variable count convergence criterion. Default is 20. \item \verb|--enable-ww-extensions|\\ Enable the Watson-Woodruff PH extensions plugin. Default is False. \item \verb|--ww-extension-cfgfile|=WW\_EXTENSION\_CFGFILE\\ The name of a configuration file for the Watson-Woodruff PH extensions plugin. Default is wwph.cfg. \item \verb|--ww-extension-suffixfile|=WW\_EXTENSION\_SUFFIXFILE\\ The name of a variable suffix file for the Watson-Woodruff PH extensions plugin. Default is wwph.suffixes. \item \verb|--user-defined-extension|=EXTENSIONFILE. Here, "EXTENSIONFILE" is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is "testphextension" plugin that simply prints a message to the screen for each callback. The file testphextension.py can be found in the sources directory and is shown in Section~\ref{ExtensionDetailsSec}. A test of this would be to specify "-user-defined-extension=testphextension", assuming testphextension.py is in your PYTHONPATH or current directory. 
Note that both PH extensions (WW PH and your own) can co-exist; however, the WW plugin will be invoked first. \item \verb|--scenario-solver-options| The options are specified just as in pyomo, e.g., \verb|--scenario-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for all scenario sub-problem solves to 20\% for the CPLEX solver. The options are specified in a quote deliminted string that is passed to the sub-problem solver. Whatever options specified are persistent across all solves. \item \verb|--ef-solver-options| The options are specified just as in pyomo, e.g., \verb|--scenario-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for all scenario sub-problem solves to 20\% for the CPLEX solver. The options are specified in a quote deliminted string that is passed to the EF problem solver. \item \verb|--write-ef|\\            Upon termination, write the extensive form of the model - accounting for all fixed variables. \item \verb|--solve-ef|\\            Following write of the extensive form model, solve it. \item \verb|--ef-output-file|=EF\_OUTPUT\_FILE\\ The name of the extensive form output file (currently only LP format is supported), if writing of the extensive form is enabled. Default is efout.lp. \item \verb|--suppress-continuous-variable-output|\\ Eliminate PH-related output involving continuous variables. Default: no output. \item \verb|--keep-solver-files|\\   Retain temporary input and output files for scenario sub-problem solves.  Default: files not kept. \item \verb|--output-solver-logs|\\  Output solver logs during scenario sub-problem solves. Default: no output. \item \verb|--output-ef-solver-log|\\ Output solver log during the extensive form solve. Default: no output. \item \verb|--output-solver-results|\\ Output solutions obtained after each scenario sub-problem solve. Default: no output. \item \verb|--output-times|\\        Output timing statistics for various PH components. Default: no output. 
\item \verb|--disable-warmstarts|\\ Disable warm-start of scenario sub-problem solves in PH iterations >= 1. Default=False (i.e., warm starts are the default). \item \verb|--drop-proximal-terms|\\ Eliminate proximal terms (i.e., the quadratic penalty terms) from the weighted PH objective. Default=False (i.e., but default, the proximal terms are included). \item \verb|--retain-quadratic-binary-terms|\\ Do not linearize PH objective terms involving binary decision variables. Default=False (i.e., the proximal term for binary variables is linearized by default; this can have some impact on the relaxations during the branch and bound solution process). \item \verb|--linearize-nonbinary-penalty-terms|=BPTS\\ Approximate the PH quadratic term for non-binary variables with a piece-wise linear function. The argument BPTS gives the number of breakpoints in the linear approximation. The default=0. Reasonable non-zero values are usually in the range of 3 to 7. Note that if a breakpoint would be very close to a variable bound, then the break point is ommited. IMPORTANT: this option requires that all variables have bounds that are either established in the reference model or by code specfied using the bounds-cfgfile command line option. See Section~\ref{LinearSec} for more information about linearizing the proximal term. \item \verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY Specify the strategy to distribute breakpoints on the [lb, ub] interval of each variable when linearizing. 0 indicates uniform distribution. 1 indicates breakpoints at the node min and max, uniformly in- between. 2 indicates more aggressive concentration of breakpoints near the observed node min/max. \item \verb|--bounds-cfgfile|=BOUNDS\_CFGFILE\\ The argument BOUNDS\_CFGFILE specifies the name of an executable pyomo file that sets bounds. The devault is that there is no file. 
\item \verb|-h|, \verb|--help|\\ Show help message and exit.
\item \verb|--verbose|\\ Generate verbose output for both initialization and execution. Default is False.
\item \verb|--report-solutions|\\ Always report PH solutions after each iteration. Enabled if \verb|--verbose| is enabled. Default is False.
\item \verb|--report-weights|\\ Always report PH weights prior to each iteration. Enabled if \verb|--verbose| is enabled. Default is False.
\item \verb|--model-directory|=MODEL\_DIRECTORY\\ The directory in which all model (reference and scenario) definitions are stored, i.e., the ``.py'' files. Default is ".".
\item \verb|--instance-directory|=INSTANCE\_DIRECTORY\\ The directory in which all instance (reference and scenario) definitions are stored, i.e., the ``.dat'' files. Default is ".".
\item \verb|--solver|=SOLVER\_TYPE\\ The type of solver used to solve scenario sub-problems.
Default is cplex.
\item \verb|--solver-manager|=SOLVER\_MANAGER\_TYPE\\ The type of solver manager used to coordinate scenario sub-problem solves. Default is serial. This option is changed in parallel applications as described in Section~\ref{parallelsec}.
\item \verb|--max-iterations|=MAX\_ITERATIONS\\ The maximum number of PH iterations. Default is 100.
\item \verb|--default-rho|=DEFAULT\_RHO\\ The default (global) rho for all blended variables. Default is 1.
\item \verb|--rho-cfgfile|=RHO\_CFGFILE\\ The name of a configuration script to compute PH rho values. Default is None.
\item \verb|--enable-termdiff-convergence|\\ Terminate PH based on the termdiff convergence metric. The convergence metric is the unscaled sum of differences between variable values and the mean. Default is True.
\item \verb|--enable-normalized-termdiff-convergence|\\ Terminate PH based on the normalized termdiff convergence metric. Each term in the termdiff sum is normalized by the average value (NOTE: it is NOT normalized by the number of scenarios). Default is False.
\item \verb|--termdiff-threshold|=TERMDIFF\_THRESHOLD\\ The convergence threshold used in the termdiff and normalized termdiff convergence criteria. Default is 0.01, which is too low for most problems.
\item \verb|--enable-free-discrete-count-convergence|\\ Terminate PH based on the free discrete variable count convergence metric. Default is False.
\item \verb|--free-discrete-count-threshold|=FREE\_DISCRETE\_COUNT\_THRESHOLD\\ The convergence threshold used in the free discrete variable count convergence criterion. Default is 20.
\item \verb|--enable-ww-extensions|\\ Enable the Watson-Woodruff PH extensions plugin. Default is False.
\item \verb|--ww-extension-cfgfile|=WW\_EXTENSION\_CFGFILE\\ The name of a configuration file for the Watson-Woodruff PH extensions plugin. Default is wwph.cfg.
\item \verb|--ww-extension-suffixfile|=WW\_EXTENSION\_SUFFIXFILE\\ The name of a variable suffix file for the Watson-Woodruff PH extensions plugin. Default is wwph.suffixes.
\item \verb|--user-defined-extension|=EXTENSIONFILE\\ Here, "EXTENSIONFILE" is the module name, which is in either the current directory (most likely) or somewhere on your PYTHONPATH. A simple example is the "testphextension" plugin, which simply prints a message to the screen for each callback. The file testphextension.py can be found in the sources directory and is shown in Section~\ref{ExtensionDetailsSec}. A test of this would be to specify "--user-defined-extension=testphextension", assuming testphextension.py is in your PYTHONPATH or current directory. Note that both PH extensions (WW PH and your own) can co-exist; however, the WW plugin will be invoked first.
\item \verb|--scenario-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--scenario-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for all scenario sub-problem solves to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the sub-problem solver. Any options specified are persistent across all solves.
\item \verb|--ef-solver-options|\\ The options are specified just as in pyomo, e.g., \verb|--ef-solver-options="mip_tolerances_mipgap=0.2"| to set the mipgap for the extensive form solve to 20\% for the CPLEX solver. The options are specified in a quote-delimited string that is passed to the EF problem solver.
\item \verb|--write-ef|\\ Upon termination, write the extensive form of the model, accounting for all fixed variables.
\item \verb|--solve-ef|\\ Following the write of the extensive form model, solve it.
\item \verb|--ef-output-file|=EF\_OUTPUT\_FILE\\ The name of the extensive form output file (currently only LP format is supported), if writing of the extensive form is enabled. Default is efout.lp.
\item \verb|--suppress-continuous-variable-output|\\ Eliminate PH-related output involving continuous variables. Default: no output.
\item \verb|--keep-solver-files|\\ Retain temporary input and output files for scenario sub-problem solves. Default: files not kept.
\item \verb|--output-solver-logs|\\ Output solver logs during scenario sub-problem solves. Default: no output.
\item \verb|--output-ef-solver-log|\\ Output the solver log during the extensive form solve. Default: no output.
\item \verb|--output-solver-results|\\ Output solutions obtained after each scenario sub-problem solve. Default: no output.
\item \verb|--output-times|\\ Output timing statistics for various PH components. Default: no output.
\item \verb|--disable-warmstarts|\\ Disable warm-start of scenario sub-problem solves in PH iterations >= 1. Default=False (i.e., warm starts are the default).
\item \verb|--drop-proximal-terms|\\ Eliminate proximal terms (i.e., the quadratic penalty terms) from the weighted PH objective. Default=False (i.e., by default, the proximal terms are included).
\item \verb|--retain-quadratic-binary-terms|\\ Do not linearize PH objective terms involving binary decision variables. Default=False (i.e., the proximal term for binary variables is linearized by default; this can have some impact on the relaxations during the branch-and-bound solution process).
\item \verb|--linearize-nonbinary-penalty-terms|=BPTS\\ Approximate the PH quadratic term for non-binary variables with a piecewise linear function. The argument BPTS gives the number of breakpoints in the linear approximation. The default is 0. Reasonable non-zero values are usually in the range of 3 to 7. Note that if a breakpoint would be very close to a variable bound, then the breakpoint is omitted. IMPORTANT: this option requires that all variables have bounds that are established either in the reference model or by code specified using the \verb|--bounds-cfgfile| command line option. See Section~\ref{LinearSec} for more information about linearizing the proximal term.
\item \verb|--breakpoint-strategy|=BREAKPOINT\_STRATEGY\\ Specify the strategy used to distribute breakpoints on the [lb, ub] interval of each variable when linearizing. 0 indicates uniform distribution. 1 indicates breakpoints at the node min and max, uniformly in between. 2 indicates more aggressive concentration of breakpoints near the observed node min/max.
\item \verb|--bounds-cfgfile|=BOUNDS\_CFGFILE\\ The argument BOUNDS\_CFGFILE specifies the name of an executable pyomo file that sets bounds. The default is that there is no file. When specified, the code in this file is executed after the initialization of scenario data, so the bounds can be based on data from all scenarios. The config subdirectory of the farmer example contains a simple example of such a file (boundsetter.cfg).
\item \verb|--checkpoint-interval|\\ The number of iterations between writes of a checkpoint file. Default is 0, indicating never.
\item \verb|--restore-from-checkpoint|\\ The name of the checkpoint file from which PH should be initialized. Default is not to restore from a checkpoint.
\item \verb|--profile=PROFILE|\\ Enable profiling of Python code. The value of this option is the number of functions that are summarized. The default is no profiling.
\item \verb|--enable-gc|\\ Enable the Python garbage collector. The default is no garbage collection.
\end{itemize}

\section{Extensions via Callbacks \label{CallbackSec}}

Basic PH can converge slowly, so it is usually advisable to extend or modify it. In pysp, this is done via the pyomo plug-in mechanism. The basic PH implementation provides callbacks that enable access to the data structures used by the algorithm. In \S\ref{WWExtensionSec} we describe extensions that are provided with the release. In \S\ref{ExtensionDetailsSec}, we provide information for power users who may wish to modify or replace the extensions.
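The three \verb|--breakpoint-strategy| values listed in the option list above can be illustrated with a small standalone sketch. The helper below is ours, not part of PySP, and the cubic spacing used for strategy 2 is only one plausible way to concentrate points near the node min/max:

```python
def breakpoints(lb, ub, node_min, node_max, n, strategy):
    """Place n breakpoints on [lb, ub] for piecewise linearization.

    strategy 0: uniform over [lb, ub]
    strategy 1: at node_min and node_max, uniform in between
    strategy 2: concentrated near node_min and node_max (illustrative)
    """
    if n <= 0:
        return []
    if strategy == 0:
        step = (ub - lb) / (n + 1)
        return [lb + i * step for i in range(1, n + 1)]
    if strategy == 1:
        if n == 1:
            return [(node_min + node_max) / 2.0]
        step = (node_max - node_min) / (n - 1)
        return [node_min + i * step for i in range(n)]
    # strategy 2: map uniform points through a cubic that bunches them
    # toward the interval ends (one possible "aggressive" concentration)
    pts = []
    for i in range(n):
        t = i / (n - 1.0) if n > 1 else 0.5      # uniform in [0, 1]
        s = 0.5 * (1.0 + (2.0 * t - 1.0) ** 3)   # bunched toward 0 and 1
        pts.append(node_min + s * (node_max - node_min))
    return pts
```

For example, with bounds [0, 10], observed node min/max of 2 and 8, and three breakpoints, strategy 0 spreads them over the full bounds while strategy 1 pins the first and last to the node min/max.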
\subsection{Watson and Woodruff Extensions \label{WWExtensionSec}}

Watson and Woodruff describe innovations for accelerating PH \cite{phinnovate}, most of which are generalized and implemented in the file \verb|wwextension.py|, but users generally do not need to know this file name. To invoke the program with these additional features, invoke the software with a command of the form:
\begin{verbatim}
runph --enable-ww-extensions
\end{verbatim}
Many of the examples described in \S\ref{ExampleSec} use this plug-in. The main concept is that some integer variables should be fixed as the algorithm progresses, for two reasons:
\begin{itemize}
\item Convergence detection: A detailed analysis of PH algorithm behavior on a variety of problems indicates that individual decision variables frequently converge to specific, fixed values across scenarios in early PH iterations. Further, despite interactions among the variables, the value frequently does not change in subsequent PH iterations. Such ``variable fixing'' behaviors lead to a potentially powerful, albeit obvious, heuristic: once a particular variable has been the same in all scenarios for some number of iterations, fix it to that value.
For problems where the constraints effectively limit $x$ from both sides, these methods may result in PH encountering infeasible scenario sub-problems even though the problem is ultimately feasible.
\item Cycle detection: When there are integer variables, cycling is sometimes encountered; consequently, cycle detection and avoidance mechanisms are required to force eventual convergence of the PH algorithm in the mixed-integer case. To detect cycles, we focus on repeated occurrences of the weights, implemented using a simple hashing scheme \cite{tabuhash} to minimize the impact on run-time. Once a cycle in the weight vectors associated with any decision variable is detected, the value of that variable is fixed.
\end{itemize}
Fixing variables aggressively can result in shorter solution times, but can also result in solutions that are not as good. Furthermore, for some problems, aggressive fixing can result in infeasible sub-problems even though the problem is ultimately feasible. Many of the parameters discussed in the next subsections control the fixing of variables. This is discussed in a tutorial in Section~\ref{WWTutorialSec}.

\subsubsection{Variable Specific Parameters}

The plug-in makes use of parameters to control behavior at the variable level. Global defaults (to override the defaults stated here) should be set using methods described in \S\ref{ParmSec}. Values for each variable should be set using methods described in \S\ref{SuffixSec}. Note that for variable fixing based on convergence detection, iteration zero is treated separately. The parameters are as follows:
\begin{itemize}
\item fix\_continuous\_variables: True or False. If True, fixing applies to all variables. If False, then fixing applies only to discrete variables.
\item Iter0FixIfConvergedAtLB: 1 (True) or 0 (False).
If 1, then discrete variables that are at their lower bound in all scenarios after the iteration zero solves will be fixed at that bound.
\item Iter0FixIfConvergedAtUB: 1 (True) or 0 (False). If 1, then discrete variables that are at their upper bound in all scenarios after the iteration zero solves will be fixed at that bound.

Note that the command \verb|coopr-ns| and the argument \verb|solver-manager| have a dash in the middle, while the commands \verb|dispatch_srvr| and \verb|pyro_mip_server| have underscores. The first three commands launch processes that have no internal mechanism for termination; i.e., they will be terminated only if they crash or if they are killed by an external process. It is common to launch these processes with output redirection, such as \verb|coopr-ns >& cooprns.log|. The \verb|runph| command is a normal runph command with the usual arguments, with the additional specification that sub-problem solves should be directed to the pyro solver manager.
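The weight-vector hashing idea used for cycle detection above can be sketched independently of PySP. The helper below is our own illustration of the technique, not the \verb|wwextension.py| implementation:

```python
def check_cycle(seen_hashes, weights, ndigits=6):
    """Record the hash of one variable's per-scenario weight vector and
    report whether the same (rounded) vector has occurred before.

    seen_hashes: set of hashes observed in earlier PH iterations.
    weights: iterable of per-scenario weight values for one variable.
    Returns True if a repeat is detected (a candidate cycle), else False.
    """
    key = hash(tuple(round(w, ndigits) for w in weights))
    if key in seen_hashes:
        return True
    seen_hashes.add(key)
    return False

# Once a repeat is detected, the WW extension would fix the variable's value.
seen = set()
first = check_cycle(seen, [0.5, -0.5, 1.25])   # first occurrence: no cycle
repeat = check_cycle(seen, [0.5, -0.5, 1.25])  # same vector again: cycle
```

Storing only hashes (rather than full weight vectors) keeps the per-iteration bookkeeping cheap, which is the point of the hashing scheme cited in the text.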
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/asyncphdriver.py

r1646
 # for profiling
-import cProfile
+try:
+    import cProfile as profile
+except ImportError:
+    import profile
 import pstats
…
     traceback.print_exc()
 else:
-    cProfile.run('run_ph()','profile.stats')
+    profile.run('run_ph()','profile.stats')
     p=pstats.Stats('profile.stats')
     p.sort_stats('time')
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/farmer_lp.dat

r1739
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
+param Yield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/farmer_lp.py

r2734
 model.PlantingCostPerAcre = Param(model.CROPS, within=PositiveReals)
-model.MeanYield = Param(model.CROPS, within=NonNegativeReals)
+model.Yield = Param(model.CROPS, within=NonNegativeReals)
 #
 def cattle_feed_rule(i, model):
-    return model.CattleFeedRequirement[i] <= (model.MeanYield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
+    return model.CattleFeedRequirement[i] <= (model.Yield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
 model.EnforceCattleFeedRequirement = Constraint(model.CROPS, rule=cattle_feed_rule)
 def limit_amount_sold_rule(i, model):
-    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.MeanYield[i] * model.DevotedAcreage[i]) <= 0.0
+    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.Yield[i] * model.DevotedAcreage[i]) <= 0.0
 model.LimitAmountSold = Constraint(model.CROPS, rule=limit_amount_sold_rule)
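Outside of Pyomo, the renamed \verb|Yield| parameter enters both constraints as a simple yield-times-acreage product. The plain-Python check below uses the AverageScenario yields from this changeset; the acreage and requirement figures in the comments are illustrative, not taken from the example data:

```python
# Yields per acre from the farmer AverageScenario data (after the
# MeanYield -> Yield rename in this changeset).
yield_per_acre = {'WHEAT': 2.5, 'CORN': 3.0, 'SUGAR_BEETS': 20.0}

def cattle_feed_ok(requirement, crop, acreage, purchased, sub_sold, super_sold):
    """Mirror of cattle_feed_rule: the feed requirement must be met by
    production plus purchases, net of sales."""
    production = yield_per_acre[crop] * acreage
    return requirement <= production + purchased - sub_sold - super_sold

def amount_sold_ok(crop, acreage, sub_sold, super_sold):
    """Mirror of limit_amount_sold_rule: cannot sell more than is grown."""
    return sub_sold + super_sold <= yield_per_acre[crop] * acreage
```

For instance, 120 acres of wheat produce 120 * 2.5 = 300 tons, so selling 250 tons is admissible while selling 350 is not.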
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/maxmodels/ReferenceModel.py

r2734
 model.PlantingCostPerAcre = Param(model.CROPS, within=PositiveReals)
-model.MeanYield = Param(model.CROPS, within=NonNegativeReals)
+model.Yield = Param(model.CROPS, within=NonNegativeReals)
 #
 def cattle_feed_rule(i, model):
-    return model.CattleFeedRequirement[i] <= (model.MeanYield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
+    return model.CattleFeedRequirement[i] <= (model.Yield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
 model.EnforceCattleFeedRequirement = Constraint(model.CROPS, rule=cattle_feed_rule)
 def limit_amount_sold_rule(i, model):
-    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.MeanYield[i] * model.DevotedAcreage[i]) <= 0.0
+    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.Yield[i] * model.DevotedAcreage[i]) <= 0.0
 model.LimitAmountSold = Constraint(model.CROPS, rule=limit_amount_sold_rule)
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/models/ReferenceModel.py

r2734
 model.PlantingCostPerAcre = Param(model.CROPS, within=PositiveReals)
-model.MeanYield = Param(model.CROPS, within=NonNegativeReals)
+model.Yield = Param(model.CROPS, within=NonNegativeReals)
 #
 def cattle_feed_rule(i, model):
-    return model.CattleFeedRequirement[i] <= (model.MeanYield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
+    return model.CattleFeedRequirement[i] <= (model.Yield[i] * model.DevotedAcreage[i]) + model.QuantityPurchased[i] - model.QuantitySubQuotaSold[i] - model.QuantitySuperQuotaSold[i]
 model.EnforceCattleFeedRequirement = Constraint(model.CROPS, rule=cattle_feed_rule)
 def limit_amount_sold_rule(i, model):
-    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.MeanYield[i] * model.DevotedAcreage[i]) <= 0.0
+    return model.QuantitySubQuotaSold[i] + model.QuantitySuperQuotaSold[i] - (model.Yield[i] * model.DevotedAcreage[i]) <= 0.0
 model.LimitAmountSold = Constraint(model.CROPS, rule=limit_amount_sold_rule)
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/nodedata/AboveAverageNode.dat

r1446
 # above mean scenario
-param MeanYield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
+param Yield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/nodedata/AverageNode.dat

r1446
 # "mean" scenario
-param MeanYield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
+param Yield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/nodedata/BelowAverageNode.dat

r1446
 # below-mean scenario
-param MeanYield := WHEAT 2.0 CORN 2.4 SUGAR_BEETS 16 ;
+param Yield := WHEAT 2.0 CORN 2.4 SUGAR_BEETS 16 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/nodedata/ReferenceModel.dat

r1446
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
+param Yield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/scenariodata/AboveAverageScenario.dat

r1446
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
+param Yield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/scenariodata/AverageScenario.dat

r1446
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
+param Yield := WHEAT 2.5 CORN 3 SUGAR_BEETS 20 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/scenariodata/BelowAverageScenario.dat

r1446
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 2.0 CORN 2.4 SUGAR_BEETS 16 ;
+param Yield := WHEAT 2.0 CORN 2.4 SUGAR_BEETS 16 ;
• ## coopr.pysp/stable/2.4/examples/pysp/farmer/scenariodata/ReferenceModel.dat

r1446
 param PlantingCostPerAcre := WHEAT 150 CORN 230 SUGAR_BEETS 260 ;
-param MeanYield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
+param Yield := WHEAT 3.0 CORN 3.6 SUGAR_BEETS 24 ;
• ## coopr.pysp/stable/2.4/examples/pysp/sizes/SIZES10/phdriver.py

r1706
 # for profiling
-import cProfile
+try:
+    import cProfile as profile
+except ImportError:
+    import profile
 import pstats
…
     traceback.print_exc()
 else:
-    cProfile.run('run_ph()','profile.stats')
+    profile.run('run_ph()','profile.stats')
     p=pstats.Stats('profile.stats')
     p.sort_stats('time')