from a_j to the hospital, and let w_j convert distance into expected minutes of travel time to the j-th zone. Then equation (6.1) would be amended to:

M(X) = \max\{ w_j \ell(X, a_j) + g_j : j = 1, \dots, n \}.   (6.1)

The interactive graphics algorithm can be used upon defining the circular regions as:

C_j(\lambda_0) = \{ X \mid \ell(X, a_j) \le c_j(\lambda_0) \},  where  c_j(\lambda_0) = \max[0, (\lambda_0 - g_j)/w_j].

Multifacility Problems

We now describe a convenient iterative approach that complements the Weiszfeld method for minimum-sum location problems. The approach produces successively improved approximations to a solution of the Karush-Kuhn-Tucker conditions, which are sufficient for this problem. These conditions are stated in Property 6.2. (When derivatives are taken, it will be important to recall the convention that whenever i > r, w_{2ir} is evaluated as w_{2ri}.)

Property 6.2 [Necessary and sufficient conditions for optimality using Euclidean distances]. The Karush-Kuhn-Tucker necessary (and sufficient) conditions for (z*, X*) to solve problem (6.3b) with p = 2 are given by the constraints of problem (6.3b) together with:

\sum_{j=1}^{n} v_{ij}^* w_{1ij}^2 (x_{ik}^* - a_{jk}) + \sum_{r=1, r \ne i}^{m} u_{ir}^* w_{2ir}^2 (x_{ik}^* - x_{rk}^*) = 0,   i = 1, \dots, m; \; k = 1, 2   (6.5)

\sum_{i=1}^{m} \sum_{j=1}^{n} v_{ij}^* + \sum_{i=1}^{m-1} \sum_{r=i+1}^{m} u_{ir}^* = 1   (6.6)

v_{ij}^* \left( z^* - w_{1ij}^2 [\ell_2(X_i^*, a_j)]^2 \right) = 0,   i = 1, \dots, m; \; j = 1, \dots, n   (6.7)

u_{ir}^* \left( z^* - w_{2ir}^2 [\ell_2(X_i^*, X_r^*)]^2 \right) = 0,   i = 1, \dots, m-1; \; r = i+1, \dots, m   (6.8)

and all  v_{ij}^* \ge 0, \; u_{ir}^* \ge 0.   (6.9)

The v_{ij}^* and u_{ir}^* are the Lagrange multipliers (dual variables).

Before describing the algorithm, we define a particular minimum-sum problem. For given values of v_{ij} and u_{ir} (indicated by overbars), we will need to:

minimize  W(X; \bar{u}, \bar{v}) = \sum_{i=1}^{m} \sum_{j=1}^{n} \hat{w}_{1ij} [(x_{i1} - a_{j1})^2 + (x_{i2} - a_{j2})^2] + \sum_{i=1}^{m-1} \sum_{r=i+1}^{m} \hat{w}_{2ir} [(x_{i1} - x_{r1})^2 + (x_{i2} - x_{r2})^2]   (6.10)

where \hat{w}_{1ij} = \bar{v}_{ij} w_{1ij}^2 and \hat{w}_{2ir} = \bar{u}_{ir} w_{2ir}^2. Problem (6.10) is a squared-Euclidean distance problem that has an analytical solution. This is discussed in the Appendix, mathematical note 6.1.

Property 6.3 [Lagrange multipliers and primal variables]. If \bar{v} = v^* and \bar{u} = u^*, then X^* [which solves problem (6.3b) with p = 2] is a global minimizer of W(X; u^*, v^*).
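Because problem (6.10) separates by coordinate, its analytical solution amounts to solving one small m-by-m linear system per coordinate: setting the partial derivatives of W to zero gives, for each k, the equations x_ik (sum_j w1h_ij + sum_{r != i} w2h_ir) - sum_{r != i} w2h_ir x_rk = sum_j w1h_ij a_jk in the combined weights w1h_ij = v̄_ij w²_1ij and w2h_ir = ū_ir w²_2ir. The sketch below (my own illustration, not the book's code; all function names are invented) solves this system by direct elimination:

```python
# Sketch: the analytical solution of the squared-Euclidean subproblem (6.10).
# w1h[i][j] and w2h[i][r] are the COMBINED weights v̄_ij*w1_ij**2 and ū_ir*w2_ir**2
# (w2h is a symmetric m x m matrix with zero diagonal).

def solve_linear(A, b):
    """Tiny Gaussian elimination with partial pivoting (m is small here)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def minimize_W(a, w1h, w2h):
    """Global minimizer of W: one m x m linear system per coordinate."""
    m, n = len(w1h), len(a)
    coords = []
    for k in range(2):  # the two coordinates separate
        A = [[0.0] * m for _ in range(m)]
        b = [0.0] * m
        for i in range(m):
            A[i][i] = sum(w1h[i]) + sum(w2h[i][r] for r in range(m) if r != i)
            for r in range(m):
                if r != i:
                    A[i][r] = -w2h[i][r]
            b[i] = sum(w1h[i][j] * a[j][k] for j in range(n))
        coords.append(solve_linear(A, b))
    return [(coords[0][i], coords[1][i]) for i in range(m)]
```

Correctness can be checked by verifying that the stationarity equations hold (to rounding error) at the returned point.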
Proof: W(X; \bar{u}, \bar{v}) is a convex differentiable function of X. Hence, setting the partial derivatives to zero and solving the resulting system of linear equations will produce a global minimizer of W; ie, solving:

\partial W / \partial x_{ik} = 2 \sum_{j=1}^{n} \bar{v}_{ij} w_{1ij}^2 (x_{ik} - a_{jk}) + 2 \sum_{r=1, r \ne i}^{m} \bar{u}_{ir} w_{2ir}^2 (x_{ik} - x_{rk}) = 0,   i = 1, \dots, m; \; k = 1, 2

will produce a global minimizer of W. But with \bar{v} = v^* and \bar{u} = u^*, these equations are the same as those in expression (6.5), which X^* must satisfy; this means X^* minimizes W(X; u^*, v^*), as claimed.

If u^* and v^* were known in advance, we could solve the location problem analytically by solving problem (6.10). This observation leads to an iterative procedure for estimating u^* and v^*. Let X^{(t)} be a minimizer of W(X; u^{(t)}, v^{(t)}), where u^{(t)} and v^{(t)} are the t-th estimates of u^* and v^*. Define:

v_{ij}^{(t+1)} = v_{ij}^{(t)} w_{1ij}^2 [\ell_2(X_i^{(t)}, a_j)]^2 / S_t   (6.11)

u_{ir}^{(t+1)} = u_{ir}^{(t)} w_{2ir}^2 [\ell_2(X_i^{(t)}, X_r^{(t)})]^2 / S_t   (6.12)

where:

S_t = \sum_{i=1}^{m} \sum_{j=1}^{n} v_{ij}^{(t)} w_{1ij}^2 [\ell_2(X_i^{(t)}, a_j)]^2 + \sum_{i=1}^{m-1} \sum_{r=i+1}^{m} u_{ir}^{(t)} w_{2ir}^2 [\ell_2(X_i^{(t)}, X_r^{(t)})]^2.

By construction, v_{ij}^{(t+1)} \ge 0, u_{ir}^{(t+1)} \ge 0 and

\sum_{i=1}^{m} \sum_{j=1}^{n} v_{ij}^{(t+1)} + \sum_{i=1}^{m-1} \sum_{r=i+1}^{m} u_{ir}^{(t+1)} = 1.   (6.13)
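The update (6.11)-(6.12) simply rescales each multiplier by its term's squared weighted distance and renormalizes by S_t, so (6.13) holds automatically. A minimal sketch with hypothetical numbers (the function name and data layout are my own, not the book's):

```python
# Sketch of the multiplier update (6.11)-(6.12). Each multiplier is scaled by
# its term's squared weighted distance; dividing by S_t renormalizes so that
# the updated multipliers are nonnegative and sum to one, as in (6.13).

def update_multipliers(v, u, sqdist_va, sqdist_u):
    """v[i][j], u[i][r] (r > i): current multiplier estimates.
    sqdist_va[i][j] = w1_ij**2 * l2(X_i, a_j)**2 at the current iterate;
    sqdist_u[i][r]  = w2_ir**2 * l2(X_i, X_r)**2. Returns updated (v, u)."""
    S = sum(v[i][j] * sqdist_va[i][j] for i in v for j in v[i]) \
      + sum(u[i][r] * sqdist_u[i][r] for i in u for r in u[i])
    v_new = {i: {j: v[i][j] * sqdist_va[i][j] / S for j in v[i]} for i in v}
    u_new = {i: {r: u[i][r] * sqdist_u[i][r] / S for r in u[i]} for i in u}
    return v_new, u_new
```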
Because X^{(t)} minimizes W, the partial derivatives of W(X; u^{(t)}, v^{(t)}) must be zero at X^{(t)}. Using equations (6.11) and (6.12) to eliminate v_{ij}^{(t)} and u_{ir}^{(t)}, we obtain:

\sum_{j=1}^{n} \frac{v_{ij}^{(t+1)} w_{1ij}^2 (x_{ik}^{(t)} - a_{jk})}{w_{1ij}^2 [\ell_2(X_i^{(t)}, a_j)]^2} + \sum_{r=1, r \ne i}^{m} \frac{u_{ir}^{(t+1)} w_{2ir}^2 (x_{ik}^{(t)} - x_{rk}^{(t)})}{w_{2ir}^2 [\ell_2(X_i^{(t)}, X_r^{(t)})]^2} = 0   (6.14)

for i = 1, \dots, m; k = 1, 2. Now suppose v^{(t+1)} = v^* and u^{(t+1)} = u^*. Then by the complementary slackness conditions (6.7) and (6.8),

z^* = w_{1ij}^2 [\ell_2(X_i^*, a_j)]^2 = w_{2ir}^2 [\ell_2(X_i^*, X_r^*)]^2  for  v_{ij}^* \ne 0, \; u_{ir}^* \ne 0.

Hence the denominators in equation (6.14) could be replaced by 1 in this case. [Terms in equation (6.14) for which the denominator is zero are to be omitted because the associated terms in problem (6.3b) do not define z*.] Considering conditions (6.9) and comparing equations (6.13) and (6.14), with denominators replaced by 1, to equations (6.6) and (6.5) suggests u^{(t+1)} and v^{(t+1)} are approximations to u^* and v^*. The following algorithm is based on this observation.

Lawson-Charalambous Algorithm

Step 0. Set all v_{ij}^{(1)} = 1 and u_{ir}^{(1)} = 1; set t = 1 and choose \epsilon > 0.

Step 1. Find a minimizer X^{(t)} of W(X; u^{(t)}, v^{(t)}). Then set v^{(t+1)} and u^{(t+1)} according to equations (6.11) and (6.12).

Step 2. Calculate \lambda_U = M(X^{(t)}) according to equation (6.2) with p = 2, and set

\lambda_L = \sum_{i=1}^{m} \sum_{j=1}^{n} v_{ij}^{(t+1)} w_{1ij} \ell_2(X_i^{(t)}, a_j) + \sum_{i=1}^{m-1} \sum_{r=i+1}^{m} u_{ir}^{(t+1)} w_{2ir} \ell_2(X_i^{(t)}, X_r^{(t)}).

If (\lambda_U - \lambda_L)/\lambda_U < \epsilon, stop. Otherwise, set t = t + 1 and return to Step 1.

The lower bound \lambda_L on M(X^*) in Step 2 is justified in the Appendix, mathematical note 6.1. Use of the algorithm will be illustrated by two examples.

Table 6.1 Results for Example 6.1.

Iteration t   1        2        10       20       50       78
\lambda_U     3.9101   3.8167   3.5153   3.5132   3.4869   3.4771
\lambda_L     3.2076   3.2884   3.4420   3.4549   3.4705   3.4738

v^{(t)} = (0.00005, 0.00287, 0.14351, 0.51925, 0.33433)
X^{(t)} = (5.6916, 3.5987)
M(X^{(t)}) = \lambda_U = 3.4771, within 0.1% of M(X^*)

Let us return to Example 6.1, wherein M(X) of equation (6.1)
is to be minimized. In this single-facility case (see Exercise 1.2), the squared-Euclidean problem (6.10) is solved by setting:

x_k^{(t)} = \sum_{j=1}^{n} \hat{w}_{1j} a_{jk} \Big/ \sum_{j=1}^{n} \hat{w}_{1j},   k = 1, 2,

where \hat{w}_{1j} = v_j^{(t)} w_j^2. Progress of the algorithm is traced in Table 6.1, where \epsilon = 10^{-3}. Although 78 iterations were required, the computational time was minimal. This was expected because the computational work per iteration is minimal. Only v_3, v_4, and v_5 appear to be nonzero. From equation (6.7) we conclude that

[\ell_2(X^*, a_3)]^2 = [\ell_2(X^*, a_4)]^2 = [\ell_2(X^*, a_5)]^2 = z^*,

and the minimum covering circle for a_3, a_4, and a_5 is the minimum covering circle for all the points. To verify this and to obtain a more precise answer, the algorithm could be continued from the 78th iteration excluding points a_1 and a_2. This would further reduce the computational burden per iteration. That the right points were excluded could be verified upon termination of this second phase. The Elzinga-Hearn algorithm extended to weighted problems has been shown to be more efficient than the present algorithm for single-facility problems, but it has not been extended to multifacility problems. This, plus the simplicity of the present algorithm, accounts for its elaboration here; see the Reference Notes for competing algorithms.

Example 6.2. Returning to the context of Example 6.1, suppose there are now eight transmitters. Receiver 1 is to monitor the signals of transmitters 1, 2, 3, and 4, whereas receiver 2 is to monitor the remaining signals. Furthermore, receiver 2 also serves as a transmitter of data to receiver 1, with intensity indicated by w_{212} = 1.5. Other transmitter intensities are indicated by:

(w_{1ij}) =  [ 3 3 3 3 0 0 0 0 ]
             [ 0 0 0 0 3 3 3 3 ]

Transmitter locations are a_1 = (4,5), a_2 = (2,2), a_3 = (3,3), a_4 = (5,1), a_5 = (7,6), a_6 = (8,4), a_7 = (9,3), and a_8 = (9,6). Table 6.2 summarizes the results obtained with \epsilon = 10^{-4}.
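In the single-facility case the minimizer of (6.10) is just the weighted centroid given above, so the whole algorithm collapses to a few lines. A minimal sketch (my own illustration with invented data; the book's Example 6.1 coordinates are not reproduced here):

```python
import math

# Sketch of the single-facility Lawson iteration described above. Each iterate
# is the weighted centroid with combined weights v_j * w_j**2; the multipliers
# are then rescaled by the squared weighted distances, as in (6.11). lam_U is
# the objective value M(X) at the iterate and lam_L the lower bound of Step 2.

def lawson_single(a, w, eps=1e-3, max_iter=1000):
    n = len(a)
    v = [1.0 / n] * n
    for _ in range(max_iter):
        cw = [v[j] * w[j] ** 2 for j in range(n)]           # combined weights
        x = [sum(cw[j] * a[j][k] for j in range(n)) / sum(cw) for k in (0, 1)]
        d = [math.dist(x, a[j]) for j in range(n)]          # Euclidean distances
        S = sum(v[j] * (w[j] * d[j]) ** 2 for j in range(n))
        v = [v[j] * (w[j] * d[j]) ** 2 / S for j in range(n)]
        lam_U = max(w[j] * d[j] for j in range(n))          # upper bound M(X)
        lam_L = sum(v[j] * w[j] * d[j] for j in range(n))   # Step 2 lower bound
        if (lam_U - lam_L) / lam_U < eps:
            break
    return x, v, lam_U, lam_L
```

For unit weights this computes the minimum covering circle. For the illustrative points (0,0), (2,0), (0,2) that circle is centered at (1,1) with radius \sqrt{2} \approx 1.4142, and lam_U approaches that value; as in Example 6.1, convergence is slow but each iteration is cheap.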
Table 6.2 Results for Example 6.2 (successive recorded iterations).

\lambda_U   6.8718   6.5369   6.4389   6.4373   6.4178   6.4107
\lambda_L   5.6938   5.9450   6.2562   6.3259   6.3910   6.4101
v_{11} = 0.412507, v_{12} = 0.210835, v_{13} = 0.0, v_{14} = 0.375364,
v_{25} = v_{26} = 0.0, v_{27} = 0.000158, v_{28} = 0.000367, u_{12} = 0.000769,

w_{11j} \ell_2(X_1, a_j) = (6.410706, 6.409900, 2.892103, 6.410720), where j = 1, 2, 3, 4,
w_{12j} \ell_2(X_2, a_j) = (4.901313, 1.835351, 6.059212, 6.059135), where j = 5, 6, 7, 8,
w_{212} \ell_2(X_1, X_2) = 6.059177,
X_1 = (3.954334, 2.863586) and X_2 = (7.647476, 4.500006).

Only v_{11}, v_{12}, and v_{14} appear to be nonzero. Figure 6.3 confirms this. The location for receiver 1 is uniquely determined, while receiver 2 can be located anywhere in the shaded area labeled C(\lambda^*). Indeed, with computer graphics capability such a figure could be part of the output of the computer package that implements the algorithm.

Define indicators of inactive constraints in problem (6.3a) in the following way. Let:

I_1(t) = \{ (i, j) \mid w_{1ij} \ell_2(X_i^{(t)}, a_j) < (1 - \epsilon_1) \lambda_U \}
I_2(t) = \{ (i, r) \mid w_{2ir} \ell_2(X_i^{(t)}, X_r^{(t)}) < (1 - \epsilon_1) \lambda_U \}

where, for example, \epsilon_1 = 0.01. As X^{(t)} approaches X^*, elements of I_1 and I_2 will indicate which of the constraints \lambda \ge w_{1ij} \ell_2(X_i^*, a_j) and \lambda \ge w_{2ir} \ell_2(X_i^*, X_r^*) are inactive. A second phase of the algorithm is to exclude terms involving elements in I_1 and I_2 and continue iterations after having terminated on the basis of \epsilon in the first phase. The algorithm tends to distinguish the sets I_1 and I_2 rather quickly. Early exclusion of the indicated terms accelerates the algorithm and enhances numerical stability.

[Figure 6.3 Graphic solution of Example 6.2; a + marks the location of a receiver.]

Lawson (1965) observed that the algorithm can fail if at some iteration a multiplier u_{ir} or v_{ij} becomes zero for some critical term in equation (6.7) or (6.8); ie, for a term whose omission from the problem would reduce the optimal objective value. In practice, this is a very unlikely prospect. That results are valid can be checked at termination by verifying that terms with u_{ir} or v_{ij} = 0 have weighted distances not exceeding \lambda_U. The algorithm cannot fail in the single-facility case (unless all a_j are identical).
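The indicator sets can be computed directly from the weighted distances reported above for Example 6.2. A small sketch (my own; the helper name is invented), using \lambda_U \approx 6.4107 and \epsilon_1 = 0.01:

```python
# Sketch: classifying inactive constraints with the threshold (1 - eps1)*lam_U,
# using the weighted distances reported above for Example 6.2.

def inactive_sets(wd_facility, wd_inter, lam_U, eps1=0.01):
    """wd_facility[(i, j)]: weighted distance w1_ij * l2(X_i, a_j);
    wd_inter[(i, r)]: weighted distance w2_ir * l2(X_i, X_r)."""
    thresh = (1.0 - eps1) * lam_U
    I1 = {key for key, wd in wd_facility.items() if wd < thresh}
    I2 = {key for key, wd in wd_inter.items() if wd < thresh}
    return I1, I2

wd_facility = {
    (1, 1): 6.410706, (1, 2): 6.409900, (1, 3): 2.892103, (1, 4): 6.410720,
    (2, 5): 4.901313, (2, 6): 1.835351, (2, 7): 6.059212, (2, 8): 6.059135,
}
wd_inter = {(1, 2): 6.059177}
I1, I2 = inactive_sets(wd_facility, wd_inter, lam_U=6.4107)
# Receiver 1's terms for a_1, a_2, a_4 attain lam_U and remain active; every
# other term, including the interfacility term, falls below the threshold,
# consistent with the indeterminacy of receiver 2's location noted above.
```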
The inactivity of the omitted constraints can be verified using the second-phase solution. But what of the indeterminacy of X_2 in Example 6.2? We know X_2 is any point in the set C(\lambda^*), say, that is the intersection of the sets C_j(\lambda^*), j = 5, 6, 7, 8, 9, where:

C_j(\lambda^*) = \{ X_2 \mid w_{12j} \ell_2(X_2, a_j) \le \lambda^* \},   j = 5, 6, 7, 8,

and

C_9(\lambda^*) = \{ X_2 \mid w_{212} \ell_2(X_1^*, X_2) \le \lambda^* \}.

Because location is to be guided by the minimax criterion, it would seem appropriate to choose X_2 to minimize:

M(X_2) = \max \{ w_{12j} \ell_2(X_2, a_j), \; j = 5, \dots, 8; \; w_{212} \ell_2(X_1^*, X_2) \}.

This is a single-facility problem that could be solved by the algorithm. However, it is intriguing that X_2 as reported above already minimizes M(X_2). This is because X_1^* is unique, and as iterations proceed, X_1^{(t)} approaches X_1^* and begins to act like an existing facility location with respect to X_2. To see the result of this, consider v_{27}, v_{28}, and u_{12} from Table 6.2. When these weights are normalized (to sum to one), we obtain 0.122, 0.284, and 0.594, respectively. These are the weights (to three decimal digits) found using the algorithm to minimize M(X_2). The minimal value of M(X_2) was 6.059621, which agrees with the information in Table 6.2.

6.2 RECTANGULAR DISTANCES

Suppose that in Example 6.1 the existing facility locations represent response neighborhoods and the facility to be located is an ambulance base. The weights might represent probabilities of demand for service (upon division by their sum). If the locational setting is an urban area,

