
EXERCISES


4.1 Show that w_jr |x_1j − x_1r| is a convex function of x_1j and x_1r, where w_jr is a positive constant.

4.2 Using the observation that |x − a| = max[(x − a), (a − x)], find a linear programming formulation for the minimization of W_M1(x_11, ..., x_1n) that is different from that of problem (4.5) in the text. (Hint: Consider the construction: minimize d, subject to d ≥ x − a and d ≥ a − x.)
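The hint's construction can be spot-checked directly: for fixed x, the smallest d satisfying d ≥ x − a and d ≥ a − x is max(x − a, a − x), which equals |x − a|. A minimal pure-Python sketch (function names and the sample data are illustrative, not from the text):

```python
# Epigraph reformulation behind Exercise 4.2: minimizing d subject to
# d >= x - a and d >= a - x is equivalent to minimizing |x - a|.

def epigraph_abs(x, a):
    """Smallest feasible d for the two linear constraints d >= x-a, d >= a-x."""
    return max(x - a, a - x)

def weighted_sum_abs(x, points, weights):
    """Objective sum_i w_i |x - a_i|, written via the epigraph variables."""
    return sum(w * epigraph_abs(x, a) for a, w in zip(points, weights))

# Spot-check the identity |x - a| = max(x - a, a - x):
for x, a in [(3.0, 1.0), (-2.5, 4.0), (0.0, 0.0)]:
    assert epigraph_abs(x, a) == abs(x - a)

# In one dimension a weighted median minimizes the objective, and a minimizer
# always occurs at one of the data points, so searching them suffices:
pts, wts = [1.0, 4.0, 9.0], [2.0, 1.0, 1.0]
best = min(pts, key=lambda x: weighted_sum_abs(x, pts, wts))
print(best)  # 1.0 (a weighted median of the points)
```

In an actual LP one d_i variable and the two linear constraints are introduced per absolute-value term; the snippet only verifies the equivalence that makes that reformulation valid.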

4.3 Verify that if all weights are positive (all variables are present), the number of variables in problem (4.5) in the text is n² + 2mn and the number of constraints is mn + n(n − 1)/2. (Recall that n is fixed.)

4.4 Find the dual of problem (4.5) in the text. Show that the majority of the constraints in the dual are of the bounded-variable type, i.e., variables are bounded from below and above by constants. (Bounded-variable constraints are handled implicitly and thus quite efficiently by some linear programming computer codes.)

4.5 Consider the data given in Table 4.2.

a. Use linear programming to minimize W_M2(x_12, x_22).

b. Use either the edge descent method or the Juel-Love algorithm to minimize W_M1(x_11, x_21).
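Since Table 4.2 is not reproduced in this excerpt, the sketch below uses made-up data. It evaluates a rectangular-distance objective of the exercise's form for two new machines and exploits the grid-point property (an optimal solution exists with each new-machine coordinate at an existing-machine coordinate), so exhaustive search over that grid gives the value a method such as edge descent should reach:

```python
# Brute-force benchmark for a two-new-machine rectangular-distance problem.
# All data are hypothetical; Table 4.2 is not available here.
from itertools import product

a = [0.0, 3.0, 7.0]          # existing-machine x-coordinates (hypothetical)
w = [[2.0, 1.0, 0.0],        # w[j][i]: weight, new machine j to existing i
     [0.0, 1.0, 3.0]]
v12 = 2.0                    # interaction weight between the two new machines

def WM1(x1, x2):
    """Rectangular-distance objective in the first coordinate."""
    cost = v12 * abs(x1 - x2)
    for j, xj in enumerate((x1, x2)):
        cost += sum(w[j][i] * abs(xj - a[i]) for i in range(len(a)))
    return cost

# An optimum occurs with each coordinate at an existing-machine coordinate,
# so enumerating the grid of candidate pairs finds a global minimizer.
best = min(product(a, a), key=lambda p: WM1(*p))
print(best, WM1(*best))  # (3.0, 3.0) 18.0
```

Edge descent moves along edges of this piecewise-linear surface and the Juel-Love algorithm refines that search; both should terminate at a point achieving the same objective value as the enumeration.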

Figure 4E.1 Locations of Existing Machines. [Figure not reproduced; it shows the positions of the existing machines and marks the region labeled "New machine #3 to be in this area."]

4.6 Three new machines are to be optimally located among five existing ones. Figure 4E.1 gives the relative positions of the existing machines and of the constraint on the position of new machine #3. Table 4E.1 gives the relevant weights. Assume rectangular distances.

a. Use linear programming to find optimal locations for the three new machines.

b. Use the iterative scheme given by procedure (4.6) to locate the three new machines (ignoring the constraint on the location of new machine #3).

c. Suppose the only restriction on the placement of new machine #3 is that it not be located inside the rectangle whose corners are at the points (1,1), (1,8), (7,8), and (7,1). Describe a method for solving this problem.

d. Assume that there is no constraint on the location of new machine #3 and that the new machine interaction weights are: v_12 = 8, v_13 = 10, v_23 = 7. Find the optimum coordinates x_11*, x_21*, and x_31*, using either the edge descent method or the Juel-Love algorithm.

REFERENCE NOTES

SECTION 4.1 An early treatment of the Euclidean distance multi-facility problem was given by Miehle (1958), who rediscovered the Weiszfeld iterative procedure. Francis (1964) introduced the rectangular distance version of the multi-facility problem. Wersan, Quon, and Charnes (1962) and Wesolowsky and Love (1971b) applied the linear programming approach for absolute value minimization to the rectangular distance facility location context. Cabot, Francis, and Stary (1970) developed a network flow algorithm for the dual problem. Discussion of the edge descent method is from Wesolowsky (1970). The method is related to one due to Pritsker and Ghare (1970), subsequently studied by Rao (1973); the difference lies in the fact that edge descent optimizes along an edge instead of taking the next lowest grid point. The Juel-Love algorithm is further detailed in Juel and Love (1976).

SECTION 4.2 The chaining assumption for well-structured problems follows Francis and Cabot (1972). Using a differentiable hyperboloid approximation procedure (HAP), Eyster, White, and Wierwille (1973) extended the Weiszfeld iterative procedure to multi-facility problems with rectangular and Euclidean distances. Modifications to accelerate HAP were suggested by Ostresh (1977) and Charalambous (1985). The ℓ_p distance extension was given by Morris and Verdini

(1979); Morris (1981) discussed convergence. Nonlinear programming approaches were taken in Love (1967a, 1969), where constraints were prominent and the concept of a differentiable approximation method was introduced; in Wesolowsky and Love (1972), wherein a different approximating function was used; and in Love and Morris (1975b). The use of a stopping criterion involving a lower bound was described by Love and Yeong (1981); a generalized bound for the multi-facility ℓ_p distance case was given by Love and Dowling (1986). Calamai and Conn (1987) presented a second-order algorithm that does not modify the objective function to remove discontinuities in the derivatives.

APPENDIX-CHAPTER 4 Mathematical Notes

4.1 Prove Property 4.1

The function ℓ_p(X_j, X_r) is convex, where X_j and X_r are two variable points.

proof: Let (x_1j, x_1r, x_2j, x_2r) = (y_1, y_2, y_3, y_4) = Y; then ℓ_p(X_j, X_r) = f(Y) = [|y_1 − y_2|^p + |y_3 − y_4|^p]^{1/p}. We have

f(λY' + (1 − λ)Y'') = [|(λy_1' + (1 − λ)y_1'') − (λy_2' + (1 − λ)y_2'')|^p + |(λy_3' + (1 − λ)y_3'') − (λy_4' + (1 − λ)y_4'')|^p]^{1/p}

≤ [|λ(y_1' − y_2')|^p + |λ(y_3' − y_4')|^p]^{1/p} + [|(1 − λ)(y_1'' − y_2'')|^p + |(1 − λ)(y_3'' − y_4'')|^p]^{1/p}   (Minkowski inequality)

= λf(Y') + (1 − λ)f(Y''), since 0 ≤ λ ≤ 1, which completes the proof.
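The Minkowski-inequality argument above can be spot-checked numerically; the script below samples random point pairs and mixing weights and looks for convexity violations (p = 1.7 is an arbitrary choice with p ≥ 1):

```python
# Numerical spot-check of Property 4.1: convexity of f(Y), the l_p distance
# written in the stacked variable Y = (y1, y2, y3, y4).
import random

def f(Y, p=1.7):
    y1, y2, y3, y4 = Y
    return (abs(y1 - y2)**p + abs(y3 - y4)**p)**(1.0 / p)

random.seed(0)
ok = True
for _ in range(1000):
    Yp = [random.uniform(-5, 5) for _ in range(4)]   # Y'
    Ypp = [random.uniform(-5, 5) for _ in range(4)]  # Y''
    lam = random.random()
    mix = [lam * u + (1 - lam) * v for u, v in zip(Yp, Ypp)]
    # Convexity requires f(mix) <= lam*f(Y') + (1-lam)*f(Y''); allow float slack.
    if f(mix) > lam * f(Yp) + (1 - lam) * f(Ypp) + 1e-9:
        ok = False
print(ok)  # True: no violation found
```

A passing check is of course no proof, but it is a quick sanity test that the reconstructed inequality is stated the right way around.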

4.2 Prove Property 4.2

The derivatives of ℓ_p(X_j, X_r) may be undefined anywhere on the two-dimensional plane.

proof: Following note 4.1, let ℓ_p(X_j, X_r) be given by f(Y) = [|y_1 − y_2|^p + |y_3 − y_4|^p]^{1/p}.

A direction in R^4 is determined by the vector μ = (μ_1, μ_2, μ_3, μ_4), where ‖μ‖ = 1. The first directional derivative at point Y is given by D_μf(Y). The first derivatives of f(Y) do not exist at any point in the set Z = {Y : y_1 = y_2, y_3 = y_4}.

Table 4E.1 Weights for the Three-Machine Location Problem.



If D_μf(Y) existed for Y ∈ Z, it would be given by

D_μf(Y) = lim_{λ→0} [f(Y + λμ) − f(Y)] / λ

= lim_{λ→0} f(Y + λμ) / λ, since Y ∈ Z

= [|λμ_1 − λμ_2|^p + |λμ_3 − λμ_4|^p]^{1/p} / λ

= [|μ_1 − μ_2|^p + |μ_3 − μ_4|^p]^{1/p},   λ > 0,

= −[|μ_1 − μ_2|^p + |μ_3 − μ_4|^p]^{1/p},  λ < 0,

and so the limit does not exist. Since the points X_j and X_r are taken anywhere on the two-dimensional plane, the derivatives of ℓ_p(X_j, X_r) are undefined anywhere on the two-dimensional plane where new facilities j and r coincide.
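A numerical illustration of the same point, with p = 2 and an arbitrary direction μ having μ_1 ≠ μ_2: at a point of Z (coinciding facilities) the two one-sided difference quotients converge to values of opposite sign, so no directional derivative exists.

```python
# Property 4.2 illustrated: one-sided difference quotients of f at a point
# where y1 == y2 and y3 == y4 disagree in sign.
def f(Y, p=2.0):
    y1, y2, y3, y4 = Y
    return (abs(y1 - y2)**p + abs(y3 - y4)**p)**(1.0 / p)

Y = (1.0, 1.0, 4.0, 4.0)        # a point in Z: the two facilities coincide
mu = (0.6, -0.8, 0.0, 0.0)      # a unit direction with mu1 != mu2
lam = 1e-6

right = f([y + lam * m for y, m in zip(Y, mu)]) / lam     # quotient, lam > 0
left = f([y - lam * m for y, m in zip(Y, mu)]) / (-lam)   # quotient, lam < 0
print(right, left)  # approximately 1.4 and -1.4: equal magnitude, opposite sign
```

Here |μ_1 − μ_2| = 1.4, so the quotients approach ±1.4, matching the two branches in the proof.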

Chapter 5

Duality

This chapter is concerned with the development of dual mathematical programs for the single- and multi-facility location models of Chapters 2 and 4.* As opposed to the case of linear programming, dual programs for general nonlinear programming models as developed by Wolfe (1961) and described by Balinski and Baumol (1968) include primal variables. The dual problems described here are formulated entirely from the data of the primal location problem, and they are "pure" in the sense that they contain only dual variables (the models of Chapters 2 and 4 will henceforth be referred to as primal). The solution of the primal problem is a by-product of having solved the dual, and vice versa. Dual variables will be shown to have physical interpretations as direction vectors that point to optimal facility locations in the primal problem.

5.1 THE SINGLE-FACILITY EUCLIDEAN DUAL

The primal problem we will consider first is:

minimize W(X) = Σ_{i=1}^{n} w_i [(x_1 − a_1i)^2 + (x_2 − a_2i)^2]^{1/2}   (5.1)
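As a concrete illustration of an objective of the form of problem (5.1), the sketch below evaluates and minimizes it numerically with made-up data; a small constant eps is added under each square root so the gradient is defined even at the points a_i, and a Weiszfeld-style fixed-point iteration is used (this is an illustrative solver, not the dual method developed in this chapter):

```python
# Minimal sketch: smoothed single-facility Euclidean objective, minimized by
# a Weiszfeld-style iteration. Data, eps, and iteration count are illustrative.
import math

a = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # existing facilities (hypothetical)
w = [1.0, 1.0, 1.0]                        # weights (hypothetical)
eps = 1e-6

def W(x1, x2):
    """Smoothed weighted sum of Euclidean distances from (x1, x2) to the a_i."""
    return sum(wi * math.sqrt((x1 - a1)**2 + (x2 - a2)**2 + eps**2)
               for (a1, a2), wi in zip(a, w))

x1, x2 = 1.0, 1.0
for _ in range(500):
    num1 = num2 = den = 0.0
    for (a1, a2), wi in zip(a, w):
        g = wi / math.sqrt((x1 - a1)**2 + (x2 - a2)**2 + eps**2)
        num1 += g * a1
        num2 += g * a2
        den += g
    x1, x2 = num1 / den, num2 / den   # weighted-average update
print(round(W(x1, x2), 3))
```

For these three points the best vertex, (0, 0), gives W = 7, so the interior minimizer found by the iteration must come in strictly below that value.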

We will use a strictly convex approximating function to eliminate the problem of discontinuities of the derivatives of W(X). The dual of the approximating problem will then be formulated and the dual of problem (5.1) is then developed as a limiting case. The smooth approximation to problem (5.1) employed here is given by:

*This chapter assumes the reader has a background in nonlinear programming.


