# Changeset 960

Timestamp:
Jan 13, 2011 10:17:30 PM
Message:

(nonworking) Code labelled MOVE_SINGLETON is now a separate routine,
singletonRows. Code that strengthened coefficients is now a separate routine,
strengthenCoeff. Nontrivial code motion in the code blocks that finish
off the `down feasible' and `down feasible, up feasible' cases.

File:
1 edited

- branches/CglWorking101215/src/CglProbing/CglProbingProbe.cpp

The diff against r957 changes the `probeDir` parameter of `disaggCuts` from `int` to `unsigned int`:

```cpp
  probeDir should be coded using probeDown, probeUp.
*/
void disaggCuts (int nstackC, unsigned int probeDir,   // r957: int probeDir
                 double primalTolerance_, double disaggEffectiveness,
                 const OsiSolverInterface &si,
                 ...)
{
  ...
  return (feasible) ;
}

/*
  This method examines the rows on stackR looking for redundant rows that
  consist solely of singleton variables (i.e., variables that appear in
  just this constraint). It then looks to see if any of the variables in
  the row can be fixed at bound.

  Consider the case where U<i> < b<i> and all unfixed variables in the row
  are singletons (occur in no other constraint). Given that x<j> is a
  singleton, the reduced cost is c<j> - ya<j> = c<j> - ye<i> = c<j> - y<i>.
  But if U<i> < b<i>, the constraint is loose by definition, hence y<i> = 0
  and the reduced cost is c<j>. If a<ij> > 0, x<j> appears at u<j> in U<i>.
  If c<j> < 0, minimising will tend to drive x<j> to u<j>. This cannot
  violate constraint i (U<i> < b<i>) and (since x<j> is a singleton) will
  have no effect elsewhere. Hence we can fix x<j> at u<j>.

  Do the case analysis, and what you find is that against U<i> we want
    a<ij> > 0  ==>  c<j> < 0  ==>  push x<j> to u<j>
    a<ij> < 0  ==>  c<j> > 0  ==>  push x<j> to l<j>
  and against L<i> we want
    a<ij> > 0  ==>  c<j> > 0  ==>  push x<j> to l<j>
    a<ij> < 0  ==>  c<j> < 0  ==>  push x<j> to u<j>
  Extend it one more time by taking the objective direction (max/min) into
  account (dir = 1.0 for min, -1.0 for max) and you have
    against U<i>  ==>  a<ij>*c<j>*dir < 0
    against L<i>  ==>  a<ij>*c<j>*dir > 0

  John's original comment for this code was
    // also see if singletons can go to good objective
    // Taken out as should be found elsewhere
    // and has to be original column length
  but he reinstated it. Apparently there were cases where fixing the probing
  variable was required to satisfy the condition that all unfixed variables
  be singletons. Enough of them to justify reinstating this code.
```
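The sign test that closes the case analysis above (a<ij>\*c<j>\*dir < 0 against U<i>, > 0 against L<i>) can be sketched as a pair of tiny helpers. These are illustrative only; `pushAgainstU` and `pushAgainstL` are hypothetical names, not CglProbing routines:

```cpp
#include <cassert>

// Decide which bound to drive a singleton x<j> to when its row is loose
// against the upper bound U<i>. Returns +1 for `push to u<j>', -1 for
// `push to l<j>', 0 for `leave alone'. a is the coefficient a<ij>, c the
// objective coefficient c<j>, dir is 1.0 for min, -1.0 for max.
// Hypothetical helper illustrating the comment's sign test.
int pushAgainstU (double a, double c, double dir)
{
  if (a*c*dir < 0.0) return ((a > 0.0) ? 1 : -1) ;
  return (0) ;
}

// The same decision when the row is loose against the lower bound L<i>;
// the sense of the test is reversed.
int pushAgainstL (double a, double c, double dir)
{
  if (a*c*dir > 0.0) return ((a > 0.0) ? -1 : 1) ;
  return (0) ;
}
```

For minimisation (dir = 1.0), a<ij> > 0 with c<j> < 0 yields +1 (drive x<j> to u<j>), matching the first line of the table against U<i>.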
```cpp
*/
void singletonRows (int jProbe, double primalTolerance_,
                    const OsiSolverInterface &si,
                    const CoinPackedMatrix *rowCopy,
                    int *const markC, int &nstackC, int *const stackC,
                    double *const saveL, double *const saveU,
                    double *const colUpper, double *const colLower,
                    double *const colGap,
                    const int nstackR, const int *const stackR,
                    const double *const rowUpper, const double *const rowLower,
                    const double *const maxR, const double *const minR)
{
/*
  Unpack a few vectors from the row-major matrix.
*/
  const CoinBigIndex *rowStart = rowCopy->getVectorStarts() ;
  const int *column = rowCopy->getIndices() ;
  const double *rowElements = rowCopy->getElements() ;
  const int nCols = rowCopy->getNumCols() ;
/*
  `Singleton' must be based on the column length in the original system.
*/
  const double *objective = si.getObjCoefficients() ;
  const int *columnLengths = si.getMatrixByCol()->getVectorLengths() ;
  const double objSense = si.getObjSense() ;
/*
  Open a loop to work through the rows on stackR.
*/
  for (int istackR = 0 ; istackR < nstackR ; istackR++) {
    int i = stackR[istackR] ;
/*
  Check the gaps. If the constraint is potentially tight in both directions,
  there's nothing more to do here.
*/
    const double uGap = rowUpper[i]-maxR[i] ;
    const double lGap = minR[i]-rowLower[i] ;
    if (uGap < primalTolerance_ && lGap < primalTolerance_) continue ;
/*
  Scan the row to see if it meets the `all singletons' condition. Again, if
  this fails, there's nothing more to be done. Note that the original code
  didn't check the probing variable x<p>, because if you're probing a binary
  variable it's fixed. But for general integer variables a probe does not in
  general fix the variable, so we check all variables. We should not be
  executing this method if we have prima facie infeasibility.
*/
    bool canFix = true ;
    for (int kk = rowStart[i] ; kk < rowStart[i+1] ; kk++) {
      int j = column[kk] ;
      assert(colUpper[j]-colLower[j] > -primalTolerance_) ;
      if (colUpper[j] > colLower[j] && columnLengths[j] != 1) {
        canFix = false ;
        break ;
      }
    }
    if (!canFix) continue ;
/*
  If we've passed the tests, we can look for variables suitable to drive to
  bound. Work against U<i> first. We're looking for variables with a<ij> > 0
  that will be naturally driven to u<j>, or variables with a<ij> < 0 that
  will be naturally driven to l<j>. Don't overwrite the saved bounds if
  we've tightened this variable already!
*/
    if (uGap > primalTolerance_) {
      for (int kk = rowStart[i] ; kk < rowStart[i+1] ; kk++) {
        int j = column[kk] ;
        const double lj = colLower[j] ;
        const double uj = colUpper[j] ;
        if (uj > lj) {
          double value = rowElements[kk] ;
          if (objSense*objective[j]*value < 0.0) {
            assert(jProbe != j) ;
            if (!(markC[j]&(tightenLower|tightenUpper))) {
              stackC[nstackC] = j ;
              saveL[nstackC] = lj ;
              saveU[nstackC] = uj ;
            }
            assert(saveU[nstackC] > saveL[nstackC]) ;
            if (value > 0.0) {
              colLower[j] = uj ;
            } else {
              colUpper[j] = lj ;
            }
            markC[j] |= tightenLower|tightenUpper ;
            colGap[j] = -primalTolerance_ ;
            assert(nstackC < nCols) ;
            nstackC++ ;
          }
        }
      }
    }
/*
  And now the identical code, except that we're working against L<i>, hence
  the sense of the eligibility test is reversed and we want variables with
  a<ij> > 0 that will be naturally driven to l<j>, or variables with
  a<ij> < 0 that will be naturally driven to u<j>.
```
```cpp
*/
    if (lGap > primalTolerance_) {
      for (int kk = rowStart[i] ; kk < rowStart[i+1] ; kk++) {
        int j = column[kk] ;
        const double lj = colLower[j] ;
        const double uj = colUpper[j] ;
        if (uj > lj) {
          double value = rowElements[kk] ;
          if (objSense*objective[j]*value > 0.0) {
            assert(jProbe != j) ;
            if (!(markC[j]&(tightenLower|tightenUpper))) {
              stackC[nstackC] = j ;
              saveL[nstackC] = lj ;
              saveU[nstackC] = uj ;
            }
            assert(saveU[nstackC] > saveL[nstackC]) ;
            if (value < 0.0) {
              colLower[j] = uj ;
            } else {
              colUpper[j] = lj ;
            }
            markC[j] |= tightenLower|tightenUpper ;
            colGap[j] = -primalTolerance_ ;
            assert(nstackC < nCols) ;
            nstackC++ ;
          }
        }
      }
    }
  }
  return ;
}

/*
  Generate coefficient strengthening cuts. Assume we have a binary probe
  variable x<p> with a<ip> > 0. This will be a down probe, so assume that
  U<i> > b<i> before the probe, but now U'<i> < b<i> (i.e., the constraint
  is redundant against b<i> after the down probe forces x<p> to 0 and
  reduces U<i> to U'<i>).

  We would like to have the following:
    * When x<p> = 0, b'<i> = U'<i>  (after all, the lhs can't be any larger)
    * When x<p> = 1, b'<i> = b<i>   (the original value)
  Define delta = b<i> - U'<i>. Let b'<i> = b<i>-delta and let
  a'<ip> = a<ip>-delta. Then when x<p> = 1, the delta on each side cancels
  and we're left with the original constraint. When x<p> = 0, the rhs
  becomes b'<i> = b<i>-delta = U'<i>. The usual case analysis applies; see
  the typeset documentation.

  It's not necessarily true that U'<i> = U<i> - a<ip>u<p>; additional
  propagation could have reduced it even further. That doesn't alter the
  reasoning above.

  TODO: Figure out why goingToTrueBound is relevant here, other than it's
        set to zero if we're infeasible.  -- lh, 101127 --
        I think that might be all there is to it.  -- lh, 101128 --
        Well, maybe not. Check the math for the contained cuts to see if
        it's valid only for binary variables; when within distance 1 of
        the upper or lower bound of a general integer; or more generally.
        goingToTrueBound indicates the first two of these.  -- lh, 110105 --
        After working the math a few times, it looks to me like the
        important aspect is that goingToTrueBound = 2 indicates binary
        variables. I can't get the math to agree with the code otherwise.
        See the problem noted below. I'm going to restrict this method to
        binary variables until I can resolve questions about general
        integers.  -- lh, 110113 --
*/

#define STRENGTHEN_PRINT
void strengthenCoeff (
                      int jProbe, double primalTolerance_,
                      bool justReplace, bool canReplace,
                      double needEffectiveness,
                      const OsiSolverInterface &si,
                      CglProbingRowCut &rowCut,
                      const CoinPackedMatrix *rowCopy,
                      double *const colUpper, double *const colLower,
                      const double *const colsol,
                      const int nstackR, const int *const stackR,
                      const double *const rowUpper,
                      const double *const rowLower,
                      const double *const maxR, const double *const minR,
                      const int *const realRows,
                      double *const element, int *const index,
                      CglTreeInfo *const info
                     )
{
# if CGL_DEBUG > 0
  std::cout
    << "Entering strengthenCoeff, probed variable "
    << si.getColName(jProbe) << "(" << jProbe << ")." << std::endl ;
  const OsiRowCutDebugger *debugger = si.getRowCutDebugger() ;
# endif
/*
  Unpack a few vectors from the row-major matrix.
*/
  const CoinBigIndex *rowStart = rowCopy->getVectorStarts() ;
  const int *column = rowCopy->getIndices() ;
  const double *rowElements = rowCopy->getElements() ;
/*
  Open up an outer loop to walk stackR and look for interesting constraints.
```
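The delta construction described above can be checked with a few lines of arithmetic. A minimal sketch with made-up values; `strengthen` and `Strengthened` are hypothetical illustrations, not CglProbing code:

```cpp
#include <cassert>

// Coefficient strengthening as described in the comment: delta = b<i> - U'<i>,
// b' = b - delta, a' = a - delta, for a binary probe variable x<p> whose
// down probe leaves the row redundant against b<i> (U'<i> < b<i>).
// Hypothetical illustration of the construction only.
struct Strengthened { double aPrime ; double bPrime ; } ;

Strengthened strengthen (double a, double b, double Uprime)
{
  const double delta = b-Uprime ;
  const Strengthened s = { a-delta, b-delta } ;
  return (s) ;
}
```

With a<ip> = 3, b<i> = 10, U'<i> = 6: delta = 4, so a' = -1 and b' = 6. At x<p> = 1 the deltas cancel (a' - b' = a - b) and the original constraint is recovered; at x<p> = 0 the effective rhs is b' = U'<i>, as required.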
```cpp
*/
  for (int istackR = 0 ; istackR < nstackR ; istackR++) {
    int irow = stackR[istackR] ;
/*
  We can't get anywhere unless probing has made one end or the other of the
  constraint redundant. Really, we shouldn't be here if b<i> or blow<i> are
  not finite. Check with an assert.
*/
    double uGap = rowUpper[irow]-maxR[irow] ;
    assert(uGap < 1.0e8) ;
    double lGap = minR[irow]-rowLower[irow] ;
    assert(lGap < 1.0e8) ;
    if (uGap < primalTolerance_ && lGap < primalTolerance_) continue ;
/*
  We'll need the lhs activity to evaluate the effectiveness of the cut. Do a
  base calculation which we'll correct later.

  TODO: Sort out why anyColumnCuts is an obstacle. Theoretical or practical?
        Could we use the new bound instead?  -- lh, 101128 --
        After working the math, looks to me like the only effect will be
        where a column cut cuts off the current solution, in which case the
        lhs value (sum) may be incorrect, affecting the calculation of
        effectiveness. We could correct for this, if it was worth the
        effort (a max or min when calculating the sum below).
        -- lh, 110113 --
*/
    double sum = 0.0 ;
    for (int kk = rowStart[irow] ; kk < rowStart[irow+1] ; kk++) {
      sum += rowElements[kk]*colsol[column[kk]] ;
    }
/*
  Start out by working against the row upper bound U<i>. We've calculated
  sum using the current a<ip>, so reduce it by
  (-a<ip>+a'<ip>)x*<p> = (-uGap)(x*<p>) to do the check.

  We're not willing to generate a cut if it doesn't cut off the current
  solution, but we are willing to strengthen the existing constraint in
  place. Can't hurt, eh? Excluding range constraints in this case avoids
  the issue of conflicting a'<ip> if it turns out we can strengthen against
  b<i> and blow<i> in the same constraint.
*/
    if (uGap > primalTolerance_ &&
        (sum-uGap*colsol[jProbe] > maxR[irow]+primalTolerance_ ||
         (info->strengthenRow && rowLower[irow] < -1.0e20))) {
/*
  Generate the coefficients. For all except the probing variable, we just
  copy the coefficient. The probing variable becomes a'<ip> = (a<ip> - uGap).
  Allow for the possibility that a<ip> starts or ends as 0.

  TODO: The value of sum2 calculated here should be precisely
        sum-uGap*colsol[j], unless my math is way wrong.  -- lh, 110107 --
*/
      int n = 0 ;
      bool saw_aip = false ;
      double sum2 = 0.0 ;
      for (int kk = rowStart[irow] ; kk < rowStart[irow+1] ; kk++) {
        int kColumn = column[kk] ;
        double el = rowElements[kk] ;
        if (kColumn != jProbe) {
          index[n] = kColumn ;
          element[n++] = el ;
        } else {
          el = el-uGap ;
          if (fabs(el) > 1.0e-12) {
            index[n] = kColumn ;
            element[n++] = el ;
          }
          saw_aip = true ;
        }
        sum2 += colsol[kColumn]*el ;
      }
      if (!saw_aip) {
        index[n] = jProbe ;
        element[n++] = -uGap ;
        sum2 -= colsol[jProbe]*uGap ;
      }
      assert(sum == sum2) ;
/*
  Check effectiveness. Add the cut only if it's sufficiently effective. (If
  we're strengthening existing constraints, needEffectiveness is quite low.)

  TODO: (*) I can't justify colLower in uGap*(l<p>+1.0). When I work the
        math, b'<i> = b<i> - (b<i> - U'<i>) = b<i> - uGap. I'll keep
        trying, on the theory that it's some attempt at general integers,
        but I'm thinking it's wrong. Note that for binary variables, l
```