XJTU System Optimization and Scheduling Course Assignments

Assignments for the graduate course "System Optimization and Scheduling" (系统优化与调度) at Xi'an Jiaotong University (XJTU).

Assignment 1

(1) 1.1.2

(a)

$$f(x,y)=(x^2-4)^2+y^2$$

$$\nabla f(x,y)=(4x^3-16x,\ 2y)$$

$$\nabla^2 f(x,y)=\begin{pmatrix}12x^2-16 & 0\\ 0 & 2\end{pmatrix}$$

so there are three stationary points: $(2,0)$, $(-2,0)$, and $(0,0)$

the Hessian matrix is positive semi-definite only when $|x|\ge\sqrt{4/3}$

so the points $(2,0)$ and $(-2,0)$ are local minimum points, while $(0,0)$ is not a local minimum. Since
$$f(2,0)=f(-2,0)=0$$
the points $(2,0)$ and $(-2,0)$ are global minimum points of $f$.

Let $g(x,y)=-f(x,y)=-(x^2-4)^2-y^2$. Its Hessian at $(0,0)$ is not positive semi-definite, so $(0,0)$ is not a local minimum of $g(x,y)$, which means it is not a local maximum of $f(x,y)$ either.
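As a quick numeric sanity check (added, not part of the original solution), the Hessian eigenvalues at the three stationary points can be inspected directly:

```python
# Classify the stationary points of f(x, y) = (x^2 - 4)^2 + y^2
# via the eigenvalues of the Hessian.
import numpy as np

def hessian(x, y):
    return np.array([[12 * x**2 - 16, 0.0], [0.0, 2.0]])

for pt in [(2, 0), (-2, 0), (0, 0)]:
    print(pt, np.linalg.eigvalsh(hessian(*pt)))
# (+-2, 0): both eigenvalues positive -> local minima
# (0, 0): one negative eigenvalue -> saddle, not a local minimum
```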

(b)

$$f(x,y)=\tfrac12 x^2+x\cos y,\qquad \nabla f(x,y)=(x+\cos y,\ -x\sin y)$$

$-x\sin y=0$ only if $x=0$ or $y=k\pi\ (k\in\mathbb{Z})$

if $x=0$, $x+\cos y=0$ only if $y=\frac{\pi}{2}+k\pi\ (k\in\mathbb{Z})$

if $y=k\pi\ (k\in\mathbb{Z})$, $x+\cos y=0$ only if $x=-\cos k\pi=(-1)^{k+1}$

so the stationary points of $f(x,y)$ are $(0,\frac{\pi}{2}+k\pi)$ and $((-1)^{k+1},k\pi)$ $(k\in\mathbb{Z})$

$$\nabla^2 f(x,y)=\begin{pmatrix}1 & -\sin y\\ -\sin y & -x\cos y\end{pmatrix}$$

The first leading principal minor is $1>0$, and
$$|\nabla^2 f(x,y)|=-x\cos y-\sin^2 y$$

for the points $(0,\frac{\pi}{2}+k\pi)$:
$$|\nabla^2 f(x,y)|=0-1=-1<0$$
so these are saddle points, not local minima.

for the points $((-1)^{k+1},k\pi)$: here $-x\cos y=-(-1)^{k+1}(-1)^k=1$, so
$$|\nabla^2 f(x,y)|=1-0=1>0$$
and together with the leading minor $1>0$ the Hessian matrix is positive definite.

So the points $((-1)^{k+1},k\pi)$ are local minimum points of $f(x,y)$ (in fact global ones, with value $f=\tfrac12-1=-\tfrac12$), while the points $(0,\frac{\pi}{2}+k\pi)$ are not local minima.
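The corrected classification can be checked numerically by sampling $f$ around each stationary point (a small added sketch, not part of the original solution):

```python
# Sample f(x, y) = x^2/2 + x*cos(y) on a small grid around each
# stationary point and test whether the center is a strict minimum.
import numpy as np

f = lambda x, y: 0.5 * x**2 + x * np.cos(y)

for (x0, y0) in [(-1.0, 0.0), (1.0, np.pi), (0.0, np.pi / 2)]:
    h = 1e-3
    diffs = [f(x0 + dx, y0 + dy) - f(x0, y0)
             for dx in (-h, 0, h) for dy in (-h, 0, h) if (dx, dy) != (0, 0)]
    print((x0, y0), "local min" if min(diffs) > 0 else "not a local min")
# (-1, 0) and (1, pi): local min;  (0, pi/2): not a local min
```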

(c)

$$f(x,y)=\sin x+\sin y+\sin(x+y)$$

$$\nabla f(x,y)=\big(\cos x+\cos(x+y),\ \cos y+\cos(x+y)\big)$$

$$\nabla^2 f(x,y)=\begin{pmatrix}-\sin x-\sin(x+y) & -\sin(x+y)\\ -\sin(x+y) & -\sin y-\sin(x+y)\end{pmatrix}$$

let
$$\begin{cases}\cos x+\cos(x+y)=0 & (1)\\ \cos y+\cos(x+y)=0 & (2)\end{cases}$$

(1)-(2): $\cos x-\cos y=0$, so $y=x$ or $y=2\pi-x$

if $y=x$, (1) becomes $\cos x+\cos 2x=0$,

using the double angle formula: $\cos x+2\cos^2 x-1=0$

solutions: $\cos x=\tfrac12$ or $-1$, i.e. $x=\frac{\pi}{3},\ \pi,\ \frac{5\pi}{3}$

if $y=2\pi-x$, (1) becomes $\cos x+1=0$

solution: $\cos x=-1$, $x=\pi$

The stationary points are $(\frac{\pi}{3},\frac{\pi}{3})$, $(\pi,\pi)$, $(\frac{5\pi}{3},\frac{5\pi}{3})$.

$$|\nabla^2 f(x,y)|=\sin x\sin y+(\sin x+\sin y)\sin(x+y)$$

for the point $(\frac{\pi}{3},\frac{\pi}{3})$: $-\sin x-\sin(x+y)=-\sqrt3<0$ and $|\nabla^2 f(x,y)|=\frac94>0$, so its Hessian matrix is negative definite; $(\frac{\pi}{3},\frac{\pi}{3})$ is a local maximum point, with maximum value $\frac{3\sqrt3}{2}$

for the point $(\pi,\pi)$: $-\sin x-\sin(x+y)=0$ and $|\nabla^2 f(x,y)|=0$, so the second-order test is inconclusive; this point is in fact neither a local maximum nor a local minimum.

for the point $(\frac{5\pi}{3},\frac{5\pi}{3})$: $-\sin x-\sin(x+y)=\sqrt3>0$ and $|\nabla^2 f(x,y)|=\frac94>0$, so the Hessian matrix is positive definite and this point is a local minimum point, with minimum value $-\frac{3\sqrt3}{2}$.
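Again, a small added check of this classification through the Hessian eigenvalues at the three stationary points:

```python
# Hessian of f = sin x + sin y + sin(x+y) at the stationary points (t, t).
import numpy as np

def hess(t):
    s = np.sin(2 * t)
    return np.array([[-np.sin(t) - s, -s], [-s, -np.sin(t) - s]])

for t in (np.pi / 3, np.pi, 5 * np.pi / 3):
    print(round(t, 4), np.round(np.linalg.eigvalsh(hess(t)), 4))
# pi/3: both negative (local max); pi: both zero (test inconclusive; the
# point is in fact a saddle); 5*pi/3: both positive (local min)
```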

(d)

$$f(x,y)=(y-x^2)^2-x^2$$

$$\nabla f(x,y)=(4x^3-4xy-2x,\ 2y-2x^2)$$

$$\nabla^2 f(x,y)=\begin{pmatrix}12x^2-4y-2 & -4x\\ -4x & 2\end{pmatrix}$$

let
$$\begin{cases}4x^3-4xy-2x=0\\ 2y-2x^2=0\end{cases}$$

Thus the only stationary point is $(0,0)$. At this point $12x^2-4y-2=-2<0$ while
$$|\nabla^2 f(x,y)|=-4<0$$
so the Hessian matrix is indefinite (neither positive definite nor negative definite), and this stationary point is neither a local maximum nor a local minimum.

(e)

Since by (d) the only stationary point is neither a local maximum nor a local minimum, the minimum can only be attained on the boundary of the constraint set.

Consider $y=-1$: then $f(x,-1)=(-1-x^2)^2-x^2=x^4+x^2+1$.

In this case, the minimum point is $(0,-1)$ and the minimum is $1$.

Consider $y=1$: then $f(x,1)=(1-x^2)^2-x^2=x^4-3x^2+1$.

In this case, the minimum points are $(\frac{\sqrt6}{2},1)$ and $(-\frac{\sqrt6}{2},1)$, and the minimum is $-\frac54$.

Summarizing, with the constraint $-1\le y\le 1$, the global minimum exists; the global minimum points are $(\frac{\sqrt6}{2},1)$ and $(-\frac{\sqrt6}{2},1)$, and the global minimum is $-\frac54$.
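A brute-force scan over the two boundary lines (added) confirms these boundary minima:

```python
# Scan f(x, y) = (y - x^2)^2 - x^2 along the boundary lines y = -1 and y = 1.
import numpy as np

f = lambda x, y: (y - x**2) ** 2 - x**2
xs = np.linspace(-3, 3, 600001)
for y in (-1.0, 1.0):
    i = int(np.argmin(f(xs, y)))
    print(y, round(xs[i], 4), round(f(xs[i], y), 6))
# y = -1: minimum 1 at x = 0;  y = 1: minimum -1.25 at x = -sqrt(6)/2
# (and, by symmetry, also at x = +sqrt(6)/2)
```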

(2) 1.1.6

Suppose $x=(a,b)$ and $y_i=(a_i,b_i)$; then the problem can be written as
$$\min\ \sum_{i=1}^m w_i\sqrt{(a-a_i)^2+(b-b_i)^2}$$
Let $f(a,b)=\sum_{i=1}^m w_i\sqrt{(a-a_i)^2+(b-b_i)^2}$; then
$$\nabla f(a,b)=\left(\sum_{i=1}^m w_i\frac{a-a_i}{\sqrt{(a-a_i)^2+(b-b_i)^2}},\ \sum_{i=1}^m w_i\frac{b-b_i}{\sqrt{(a-a_i)^2+(b-b_i)^2}}\right)$$
$$\nabla^2 f(a,b)=\sum_{i=1}^m \frac{w_i}{\big((a-a_i)^2+(b-b_i)^2\big)^{3/2}}\begin{pmatrix}(b-b_i)^2 & -(a-a_i)(b-b_i)\\ -(a-a_i)(b-b_i) & (a-a_i)^2\end{pmatrix}$$
Setting $\nabla f(a,b)=0$ gives the stationary-point condition $\sum_{i=1}^m w_i\,\frac{x-y_i}{\|x-y_i\|}=0$,

and $\nabla^2 f(a,b)$ is positive semi-definite everywhere (each summand is a nonnegative multiple of a rank-one projection), so $f$ is convex and the stationary point is a global minimum.

(b)

The problem has only one stationary point, so the solution is unique.

(c)

Take the plane the points lie on as the reference level of potential energy.

Suppose the length of the string attached to weight $i$ is $l_i$; minimizing the total potential energy, the problem is
$$\min\ \sum_{i=1}^m w_i\big(\|x-y_i\|-l_i\big)=\sum_{i=1}^m w_i\|x-y_i\|-\sum_{i=1}^m w_i l_i$$
Since $\sum_{i=1}^m w_i l_i$ is a constant, the problem transforms to
$$\min\ \sum_{i=1}^m w_i\|x-y_i\|$$
which is the same as the problem in (a).

(3) 1.2.1

(a)

$$f(x,y)=3x^2+y^4,\qquad \nabla f(x,y)=(6x,\ 4y^3)$$

The starting point is $(1,2)$, so the descent direction is $d=-\nabla f(1,2)=(-6,-32)$. Then choose $m$ by the Armijo rule with $s=1$, $\sigma=0.1$, $\beta=0.5$; the test is $f(x)-f(x+\alpha d)\ge\sigma\alpha\|\nabla f(1,2)\|^2=0.1\cdot\alpha\cdot 1060$.

at first, $m=0$, $\alpha=s=1$:
$$f(1,2)-f(-5,-30)=19-810075=-810056<0.1\cdot 1\cdot 1060=106$$
then $m=1$, $\alpha=\beta s=0.5$:
$$f(1,2)-f(-2,-14)=19-38428=-38409<0.1\cdot 0.5\cdot 1060=53$$
then $m=2$, $\alpha=0.25$:
$$f(1,2)-f(-0.5,-6)=19-1296.75=-1277.75<0.1\cdot 0.25\cdot 1060=26.5$$
then $m=3$, $\alpha=0.125$:
$$f(1,2)-f(0.25,-2)=19-16.1875=2.8125<0.1\cdot 0.125\cdot 1060=13.25$$
then $m=4$, $\alpha=0.0625$:
$$f(1,2)-f(0.625,0)=19-1.171875=17.828\ge 0.1\cdot 0.0625\cdot 1060=6.625$$

so the first stepsize is $0.0625$, and after the iteration the point moves to $(0.625,0)$

(b)

when $m=0$, $\alpha=s=1$, the result is the same as above (the test fails).

then $m=1$, $\alpha=\beta s=0.1$:
$$f(1,2)-f(0.4,-1.2)=19-2.5536=16.4464\ge 0.1\cdot 0.1\cdot 1060=10.6$$
so the first stepsize is $0.1$, and after the iteration the point moves to $(0.4,-1.2)$
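The whole backtracking procedure for both parts can be reproduced with a short sketch (added; `armijo` is a helper name introduced here, with $s=1$ and $\sigma=0.1$ as above):

```python
# Armijo rule for f(x, y) = 3x^2 + y^4 from the starting point (1, 2),
# along the steepest descent direction -grad f(1, 2) = (-6, -32).
def armijo(beta, s=1.0, sigma=0.1):
    x, y = 1.0, 2.0
    f = lambda x, y: 3 * x**2 + y**4
    gx, gy = 6 * x, 4 * y**3                 # grad f(1, 2) = (6, 32)
    alpha = s
    while f(x, y) - f(x - alpha * gx, y - alpha * gy) < sigma * alpha * (gx**2 + gy**2):
        alpha *= beta                        # reduce the stepsize and retry
    return alpha, (x - alpha * gx, y - alpha * gy)

print(armijo(beta=0.5))  # (0.0625, (0.625, 0.0))  -- part (a)
print(armijo(beta=0.1))  # (0.1, (0.4, -1.2))      -- part (b)
```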

(c)

$$\nabla^2 f(x,y)=\begin{pmatrix}6 & 0\\ 0 & 12y^2\end{pmatrix}$$

At the point $(1,2)$ the Hessian matrix is
$$\begin{pmatrix}6 & 0\\ 0 & 48\end{pmatrix}$$
its inverse is
$$\begin{pmatrix}\tfrac16 & 0\\ 0 & \tfrac1{48}\end{pmatrix}$$
so the descent direction is
$$d=-\begin{pmatrix}\tfrac16 & 0\\ 0 & \tfrac1{48}\end{pmatrix}\begin{pmatrix}6\\ 32\end{pmatrix}=\begin{pmatrix}-1\\ -\tfrac23\end{pmatrix}$$
then choose the stepsize:

at first, $m=0$, $\alpha=s=1$. For the Newton direction the Armijo test uses $|\nabla f(1,2)^Td|=\tfrac{82}{3}$:
$$f(1,2)-f(0,\tfrac43)=19-3.16=15.84\ge 0.1\cdot 1\cdot\tfrac{82}{3}\approx 2.73$$
so the first stepsize is $1$, and the point moves to $(0,\tfrac43)$

Compared with (a), Newton's method needs far fewer stepsize trials and makes much more progress in one iteration, so its computational cost here is lower.
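A small added check of the Newton step and of the Armijo test with the correct directional derivative $\nabla f(1,2)^Td=-\frac{82}{3}$:

```python
# Newton direction at (1, 2) for f(x, y) = 3x^2 + y^4 and the Armijo test.
import numpy as np

f = lambda v: 3 * v[0]**2 + v[1]**4
g = np.array([6.0, 32.0])                     # grad f(1, 2)
H = np.array([[6.0, 0.0], [0.0, 48.0]])       # Hessian at (1, 2)
d = -np.linalg.solve(H, g)                    # (-1, -2/3)
x1 = np.array([1.0, 2.0]) + d                 # (0, 4/3)
accepted = f(np.array([1.0, 2.0])) - f(x1) >= -0.1 * 1.0 * (g @ d)
print(d, x1, accepted)                        # alpha = 1 is accepted
```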

(4) 1.2.6

(a)

PROOF:
$$\nabla f(x)=Qx-b$$
$$\|\nabla f(x)-\nabla f(y)\|=\|Qx-Qy\|\le\|Q\|\cdot\|x-y\|=L\cdot\|x-y\|$$
(b)

PROOF:
$$x_{k+1}=x_k-sD\nabla f(x_k)=x_k-sD(Qx_k-b)=(I-sDQ)x_k+sDb$$
It can be shown that
$$x_{k+1}+C=A(x_k+C),\quad\text{where } A=I-sDQ,\ B=sDb,\ C=(A-I)^{-1}B=-Q^{-1}b$$
By iterating,
$$x_{k+1}+C=A(x_k+C)=A^{k+1}(x_0+C)\ \Longrightarrow\ x_{k+1}=A^{k+1}(x_0+C)-C$$
then, with $x^*=Q^{-1}b$:
$$x_{k+1}=(I-sDQ)^{k+1}(x_0-Q^{-1}b)+Q^{-1}b$$
$$x_{k+1}-x^*=(I-sDQ)^{k+1}(x_0-Q^{-1}b)$$
so
$$\lim_{k\to\infty}(x_{k+1}-x^*)=\lim_{k\to\infty}(I-sDQ)^{k+1}(x_0-Q^{-1}b)$$

$$s\in(0,2/L)\ \Longleftrightarrow\ \rho(I-sDQ)<1$$

then:
$$\lim_{k\to\infty}(I-sDQ)^{k+1}(x_0-Q^{-1}b)=0\ \Longleftrightarrow\ \lim_{k\to\infty}(x_{k+1}-x^*)=0$$
Summarizing, the method converges to $x^*=Q^{-1}b$ for every starting point $x_0$ if and only if $s\in(0,2/L)$.
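A minimal numeric illustration (added; it takes $D=I$ and a small diagonal $Q$ purely for demonstration) of convergence inside $(0,2/L)$ and divergence outside:

```python
# Iterate x_{k+1} = x_k - s(Qx_k - b); here L = 2, so 2/L = 1.
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(Q, b)
for s in (0.5, 0.9, 1.1):                     # 1.1 > 2/L -> divergence
    x = np.array([10.0, -10.0])
    for _ in range(200):
        x = x - s * (Q @ x - b)
    print(s, np.linalg.norm(x - x_star))
```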

(5) 1.3.1

$$Q=\begin{pmatrix}1 & 0.9995\\ 0.9995 & 1\end{pmatrix}$$

$$M=\text{largest eigenvalue of }Q=1.9995,\qquad m=\text{smallest eigenvalue of }Q=0.0005$$
$$\frac{f(x_{k+1})}{f(x_k)}\le\left(\frac{M-m}{M+m}\right)^2\approx 0.999$$

Let $v_m$ and $v_M$ be the normalized eigenvectors of $Q$ corresponding to $m$ and $M$, respectively, and let
$$x_0=\frac{s}{m}v_m\pm\frac{s}{M}v_M,\qquad s\in\mathbb{R}$$
Then after the first iteration using the line minimization stepsize rule (i.e., a stepsize $\alpha=\frac{2}{M+m}$), we get
$$x_1=\frac{M-m}{M+m}\left(\frac{s}{m}v_m\mp\frac{s}{M}v_M\right)$$
which has the same form as $x_0$ (with the sign of the $v_M$ component flipped). Therefore, starting the iterations with $x_0$, we have for all $k$
$$\frac{f(x_{k+1})}{f(x_k)}=\left(\frac{M-m}{M+m}\right)^2$$
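The worst-case behaviour can be reproduced numerically (added sketch; exact line search on the quadratic $f(x)=\frac12 x^TQx$):

```python
# Steepest descent with line minimization from the worst-case start
# x0 = (1/m) v_m + (1/M) v_M; every iteration contracts f by the same ratio.
import numpy as np

Q = np.array([[1.0, 0.9995], [0.9995, 1.0]])
w, V = np.linalg.eigh(Q)                      # w = [0.0005, 1.9995]
m, M = w
x = V[:, 0] / m + V[:, 1] / M                 # worst-case starting point
f = lambda x: 0.5 * x @ Q @ x
for _ in range(3):
    g = Q @ x
    alpha = (g @ g) / (g @ Q @ g)             # line minimization stepsize
    x_new = x - alpha * g
    print(f(x_new) / f(x))                    # ((M-m)/(M+m))^2 = 0.99900...
    x = x_new
```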

(6) 1.3.4

$$f(x)=\tfrac12(x-x^*)^TQ(x-x^*),\qquad \nabla f(x)=Q(x-x^*)$$

then:
$$x_{k+1}=x_k-s\big(Q(x_k-x^*)+e_k\big)$$
$$x_{k+1}-x^*=(I-sQ)(x_k-x^*)-se_k$$
Since $\|A+B\|\le\|A\|+\|B\|$ and $\|Av\|\le\|A\|\,\|v\|$,
$$\|x_{k+1}-x^*\|\le\|(I-sQ)(x_k-x^*)\|+s\|e_k\|$$
then, with $q=\|I-sQ\|$ and $\|e_k\|\le\delta$:
$$\|x_{k+1}-x^*\|\le q\,\|x_k-x^*\|+s\delta$$
Iterating,
$$\|x_k-x^*\|\le q^k\|x_0-x^*\|+s\delta(1+q+q^2+\dots+q^{k-1})$$
so
$$\|x_k-x^*\|\le q^k\|x_0-x^*\|+\frac{s\delta}{1-q}$$
i.e.
$$\|x_k-x^*\|\le \frac{s\delta}{1-q}+q^k\|x_0-x^*\|$$

(7) 1.4.1

In Newton's method, we have
$$x_{k+1}=x_k-\alpha_k\big(\nabla^2 f(x_k)\big)^{-1}\nabla f(x_k)$$
Consider $x=Sy$ and define $h(y)=f(Sy)$, so that $\nabla h(y)=S^T\nabla f(Sy)$ and $\nabla^2 h(y)=S^T\nabla^2 f(Sy)S$. Newton's method applied to $h$ gives
$$y_{k+1}=y_k-\alpha_k\big(S^T\nabla^2 f(Sy_k)S\big)^{-1}S^T\nabla f(Sy_k)=y_k-\alpha_k S^{-1}\big(\nabla^2 f(Sy_k)\big)^{-1}\nabla f(Sy_k)$$
Start with $y_0=S^{-1}x_0$. If $y_k=S^{-1}x_k$, then $Sy_k=x_k$ and
$$Sy_{k+1}=x_k-\alpha_k\big(\nabla^2 f(x_k)\big)^{-1}\nabla f(x_k)=x_{k+1}$$
so $y_{k+1}=S^{-1}x_{k+1}$. Summarizing, by induction the sequence generated by Newton's method satisfies $y_k=S^{-1}x_k$ for all $k$.
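A quick added check of this scale invariance on a small smooth function with hand-coded derivatives (the function and the matrix $S$ below are arbitrary choices for illustration):

```python
# Run Newton's method on f and on h(y) = f(Sy) in parallel and verify
# that the iterates satisfy y_k = S^{-1} x_k at every step.
import numpy as np

def grad_f(x):  # f(x) = sum_i (exp(x_i) + x_i^2 / 2)
    return np.exp(x) + x

def hess_f(x):
    return np.diag(np.exp(x) + 1.0)

S = np.array([[2.0, 1.0], [0.0, 1.0]])
x = np.array([1.0, -1.0])
y = np.linalg.solve(S, x)                     # y_0 = S^{-1} x_0
for k in range(5):
    x = x - np.linalg.solve(hess_f(x), grad_f(x))
    gy = S.T @ grad_f(S @ y)                  # grad h(y) = S' grad f(Sy)
    Hy = S.T @ hess_f(S @ y) @ S              # hess h(y) = S' hess f(Sy) S
    y = y - np.linalg.solve(Hy, gy)
    print(k, np.linalg.norm(y - np.linalg.solve(S, x)))   # ~0 every step
```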

(8)

PROOF:

Since $f(x)$ is convex, for all $x_1,x_2\in\Omega$ and $\lambda\in(0,1)$:
$$\lambda f(x_1)+(1-\lambda)f(x_2)\ge f\big(\lambda x_1+(1-\lambda)x_2\big)$$
Since $\gamma(\cdot)$ is a monotone nondecreasing function,
$$\gamma\big(\lambda f(x_1)+(1-\lambda)f(x_2)\big)\ge \gamma\big(f(\lambda x_1+(1-\lambda)x_2)\big)$$
Since $\gamma(\cdot)$ is convex,
$$\lambda \gamma(f(x_1))+(1-\lambda)\gamma(f(x_2))\ge \gamma\big(\lambda f(x_1)+(1-\lambda)f(x_2)\big)$$
Therefore,
$$\lambda \gamma(f(x_1))+(1-\lambda)\gamma(f(x_2))\ge \gamma\big(f(\lambda x_1+(1-\lambda)x_2)\big)$$
This implies that $\gamma(f(x))$ is convex.

(9)

(a)

$$f(x,y)=5x^2+5y^2-xy-11x+11y+11$$
$$\nabla f(x,y)=(10x-y-11,\ 10y-x+11)$$

let
$$\begin{cases}10x-y-11=0\\ 10y-x+11=0\end{cases}$$

the solution of the equation set above is $(1,-1)$,

it is the point satisfying the first-order necessary conditions

(b)

$$\nabla^2 f(x,y)=\begin{pmatrix}10 & -1\\ -1 & 10\end{pmatrix}$$

It is easy to see that the Hessian matrix is always positive definite, so the point $(1,-1)$ is a global minimum point, and the global minimum is $0$.

(c)

For the steepest descent method, $D^k=I$:
$$(D^k)^{\frac12}\nabla^2 f(x,y)(D^k)^{\frac12}=\nabla^2 f(x,y)=\begin{pmatrix}10 & -1\\ -1 & 10\end{pmatrix}$$

$$M=11,\quad m=9$$
$$\frac{f(x_{k+1})-f(x^*)}{f(x_k)-f(x^*)}\le\left(\frac{M-m}{M+m}\right)^2=0.01$$

(10)

(a)

$$f(x,y)=x^2+y^2+xy-3x$$
$$\nabla f(x,y)=(2x+y-3,\ x+2y)$$
$$\nabla^2 f(x,y)=\begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$$

let
$$\begin{cases}2x+y-3=0\\ x+2y=0\end{cases}$$

the solution of the equation set above is $(2,-1)$, and the Hessian matrix is always positive definite,

so the point $(2,-1)$ is an unconstrained local minimum point.

(b)

Because the Hessian matrix is positive definite everywhere, the function $f(x,y)$ is convex over the whole real plane.

(c)

Since the unconstrained minimum $(2,-1)$ violates the constraint $y\ge 0$, the constrained minimum must lie on the boundary of the constraint set.

Consider $x=0$: then $f(0,y)=y^2$, the minimum point is $(0,0)$ and the minimum is $0$.

Consider $y=0$: then $f(x,0)=x^2-3x$, the minimum point is $(\tfrac32,0)$ and the minimum is $-\tfrac94$.

Therefore, the minimum point of $f$ subject to $x\ge 0$, $y\ge 0$ is $(\tfrac32,0)$ and the minimum is $-\tfrac94$.

(d)

For the steepest descent method, $D^k=I$:
$$(D^k)^{\frac12}\nabla^2 f(x,y)(D^k)^{\frac12}=\nabla^2 f(x,y)=\begin{pmatrix}2 & 1\\ 1 & 2\end{pmatrix}$$

$$M=3,\quad m=1$$
$$\frac{f(x_{k+1})-f(x^*)}{f(x_k)-f(x^*)}\le\left(\frac{M-m}{M+m}\right)^2=\frac14$$

Assignment 2

(1)

Q: Consider the quadratic program
$$\min\ \tfrac12 x^TQx-b^Tx\qquad \text{s.t.}\ Ax=c$$
Prove that $x^*$ is a local minimum point if and only if it is a global minimum point. (No convexity is assumed.)

Proof:

Since no convexity is assumed, we consider two cases:

  1. If $Q$ is a positive definite matrix, the problem is convex, so obviously $x^*$ is a local minimum point if and only if it is a global minimum point.

  2. If $Q$ is not a positive definite matrix:

Assume $x^*$ is a local minimum point, and let $x^*+d$ be a global minimum point.

Then $A(x^*+d)=c$, and because $Ax^*=c$, we get $Ad=0$; hence $x^*+\alpha d$ is feasible for every $\alpha$, and in particular $A(x^*+2d)=c$.

Then, according to the optimality condition at the global minimum $x^*+d$ (applied with the feasible point $x^*+2d$),
$$\nabla f(x^*+d)^T\big((x^*+2d)-(x^*+d)\big)\ge 0\ \Longrightarrow\ \big(Q(x^*+d)-b\big)^Td\ge 0$$
and applying it with the feasible point $x^*$ gives $\big(Q(x^*+d)-b\big)^Td\le 0$; hence $\big(Q(x^*+d)-b\big)^Td=0$. Moreover, by the first-order condition at the local minimum $x^*$ (both $d$ and $-d$ are feasible directions), $(Qx^*-b)^Td=0$, so
$$d^TQd=\big(Q(x^*+d)-b\big)^Td-(Qx^*-b)^Td=0$$
Now
$$f(x^*+d)=\tfrac12(x^*+d)^TQ(x^*+d)-b^T(x^*+d)=f(x^*)+\big(Q(x^*+d)-b\big)^Td-\tfrac12 d^TQd=f(x^*)$$
Since $f(x^*+d)$ is the global minimum, $f(x^*)=f(x^*+d)$, and we conclude that the local minimum is the global minimum.

(2)

Suppose that the projected negative gradient $d$ is calculated satisfying
$$g=-d+A_q^T\lambda$$
and that some component $\lambda_i$ of $\lambda$, corresponding to an inequality, is negative. Show that if the $i$th inequality is dropped, the projection $d_i$ of the negative gradient onto the remaining constraints is a feasible direction of descent.

Proof:

Let $d_i$ be the new direction vector obtained by relaxing the $i$th constraint and projecting the negative gradient onto the subspace determined by the remaining $q-1$ active constraints.

Let $A_{\bar q}$ denote the matrix $A_q$ with the row $a_i$ deleted, so that $g=-d_i+A_{\bar q}^T\bar\lambda$ for some $\bar\lambda$.

Multiplying the equation $g=-d_i+A_{\bar q}^T\bar\lambda$ by $d_i^T$ and using $A_{\bar q}d_i=0$, we obtain $g^Td_i=-d_i^Td_i<0$ (note $d_i\ne 0$; otherwise $-g$ would lie in the range of $A_{\bar q}^T$, contradicting $\lambda_i<0$ in the representation above). So $d_i$ is a direction of descent.

Multiplying $g=-d+A_q^T\lambda$ by $d_i^T$ in the same way, and using $A_{\bar q}d_i=0$ together with $d^Td_i=\|d\|^2$ (since $d$ lies in the smaller subspace), we obtain
$$-\|d_i\|^2=g^Td_i=-\|d\|^2+\lambda_i\,a_i^Td_i$$
so $\lambda_i\,a_i^Td_i=\|d\|^2-\|d_i\|^2\le 0$, because the projection onto the larger subspace is no shorter. Since $\lambda_i<0$, we conclude that $a_i^Td_i\ge 0$, i.e. moving along $d_i$ does not re-violate the dropped constraint.

So the vector $d_i$ is a direction of descent, and it is also a feasible direction.

(3) 3.1.9

(a)

$$\min\ x+y+z\qquad \text{s.t.}\ xyz=\rho^2(x+y+z)$$

Consider the Lagrangian relaxation:
$$L(x,y,z,\lambda)=x+y+z+\lambda\big(xyz-\rho^2(x+y+z)\big)$$
Setting $\nabla L=0$, we have
$$\begin{cases}1+\lambda(yz-\rho^2)=0\\ 1+\lambda(xz-\rho^2)=0\\ 1+\lambda(xy-\rho^2)=0\\ xyz=\rho^2(x+y+z)\end{cases}$$
Subtracting the first three equations pairwise gives
$$yz=xz=xy$$
so $x=y=z$.

(b)

Consider all lines that are parallel to the line passing through $a$ and $b$ and that intersect the circle.

The maximal area is attained only if the point $x$ is the point of tangency between such a parallel line and the circle.

And we can see that the line through $x$ and $\hat{x}$ is perpendicular to the tangent line at $x$,

so this line must pass through the center of the circle.

(c)

Fix two vertices of the triangle and the corresponding side, and optimize over the remaining vertex: by (b), the line from the center of the circle to the optimal vertex must be orthogonal to the fixed side. Hence the optimum is obtained when the two remaining sides are equal. Repeating this argument, fixing each side of the triangle in turn, we see that all sides should be equal.

(d)

A similar line of argument as in (c) applies.

(4) 3.3.5

(a)

Let $d\in\bar F(x^*)$; then there is a sequence $\{d^k\}\subseteq F(x^*)$ such that $d^k\to d$.

And we have:
$$\nabla f(x^*)^Td^k=\lim_{\alpha\downarrow 0}\frac{f(x^*+\alpha d^k)-f(x^*)}{\alpha}$$

Because $x^*$ is a constrained local minimum, $\frac{f(x^*+\alpha d^k)-f(x^*)}{\alpha}\ge 0$ for all sufficiently small $\alpha>0$.

So $\nabla f(x^*)^Td^k\ge 0$ for every $k$, and letting $k\to\infty$ gives $\nabla f(x^*)^Td\ge 0$.

(b)

If $x^*$ is a constrained local minimum, from (a) we can get
$$\nabla f(x^*)^Td\ge 0,\quad \forall d \text{ with } \nabla g_j(x^*)^Td\le 0,\ \forall j\in A(x^*)$$
According to Farkas' Lemma, this is true iff there exists $u$ such that
$$\nabla f(x^*)=-\sum_{j\in A(x^*)}u_j\nabla g_j(x^*),\qquad u_j\ge 0$$
Setting $u_j=0$ for $j\notin A(x^*)$,

we have the desired result.

(c)

We want to show that $\bar F(x^*)=V(x^*)$, where $V(x^*)$ is the cone of first-order feasible variations, $V(x^*)=\{d\mid \nabla g_j(x^*)^Td\le 0,\ \forall j\in A(x^*)\}$.

From the Mean Value Theorem, for each $j\in A(x^*)$ and for any $d\in F(x^*)$ there exists $\varepsilon\in[0,1]$ such that $g_j(x^*+\alpha d)=g_j(x^*)+\alpha\nabla g_j(x^*+\varepsilon\alpha d)^Td$.

Because $g_j(x^*+\alpha d)\le 0$ and $g_j(x^*)=0$,
$$\lim_{\alpha\downarrow 0}\nabla g_j(x^*+\varepsilon\alpha d)^Td\le 0$$
which implies that $\nabla g_j(x^*)^Td\le 0$ for all $j\in A(x^*)$, so $d\in V(x^*)$. Therefore $F(x^*)\subseteq V(x^*)$; since $V(x^*)$ is closed, $\bar F(x^*)\subseteq V(x^*)$.

Under each of the conditions (1)–(4), we can also prove that $\bar F(x^*)\supseteq V(x^*)$.

So we can get that $\bar F(x^*)=V(x^*)$.

(d)

We can easily see that the local minimum is $x^*=(0,0)$. So we have
$$\nabla g_1(0,0)=(0,1)^T,\qquad \nabla g_2(0,0)=(0,-1)^T$$
Obviously $g_1$ and $g_2$ are not linear, so condition (1) is violated. Furthermore, there is no vector $d=(d_1,d_2)$ such that
$$\nabla g_1(0,0)^Td=d_2<0\quad\text{and}\quad \nabla g_2(0,0)^Td=-d_2<0$$

So condition (2) is violated. Similarly, condition (3) is violated. The vectors $\nabla g_1(0,0)$ and $\nabla g_2(0,0)$ are linearly dependent since $\nabla g_1(0,0)=-\nabla g_2(0,0)$, so condition (4) is violated.

Assume $u_0\nabla f(x^*)+u_1\nabla g_1(x^*)+u_2\nabla g_2(x^*)=0$; then
$$u_0(1,1)+u_1(0,1)+u_2(0,-1)=(0,0)$$
It follows that $u_0=0$. So there is no Lagrange multiplier.

(e)

From the description we know that x is also a local minimum for the modified problem.

The modified problem has a single constraint $g_1(x)=\|h(x)\|^2$, which is active at $x^*$.

Since $g_1$ is not linear, condition (1) is violated.

Because $\nabla g_1(x^*)=2\nabla h(x^*)h(x^*)=0$, conditions (2) and (4) are violated. If condition (3) held, condition (2) would hold too, so condition (3) is violated as well. From $u_0\nabla f(x^*)=-u_1\nabla g_1(x^*)$ and $\nabla g_1(x^*)=0$, it follows that $u_0\nabla f(x^*)=0$ and hence $u_0=0$, so there is no Lagrange multiplier.

(5) 5.1.7

(a)

$$\text{let } x_i=\begin{cases}1 & \text{if object } i \text{ is included in the subset}\\ 0 & \text{otherwise}\end{cases}$$

Then the total weight and value of the objects included are $\sum_i w_ix_i$ and $\sum_i v_ix_i$, respectively, and the problem can be written as
$$\max\ f(x)=\sum_{i=1}^n v_ix_i\qquad \text{s.t.}\ \sum_{i=1}^n w_ix_i\le A,\quad x_i\in\{0,1\},\ i=1,\dots,n$$

(b)

Let $\hat f(x)=-f(x)$ and consider the equivalent problem of minimizing $\hat f(x)$ subject to the constraints given above. Then
$$L(x,\mu)=-\sum_{i=1}^n v_ix_i+\mu\left(\sum_{i=1}^n w_ix_i-A\right)$$
and
$$\hat q(\mu)=\inf_{x_i\in\{0,1\}}\left\{\sum_{i=1}^n(\mu w_i-v_i)x_i-\mu A\right\}$$
Note that the minimization above is a separable problem, and the minimum is attained at
$$\hat x_i(\mu)=\begin{cases}1 & \text{if } \mu<v_i/w_i\\ 0 & \text{if } \mu>v_i/w_i\\ 0 \text{ or } 1 & \text{otherwise}\end{cases}$$
Without loss of generality, assume that the objects are ordered such that
$$\frac{v_1}{w_1}\ge\frac{v_2}{w_2}\ge\dots\ge\frac{v_n}{w_n}$$
When $\mu\in\left[\frac{v_{j+1}}{w_{j+1}},\frac{v_j}{w_j}\right)$ for some $j$ with $1\le j\le n$, then $\hat x_i=1$ for all $i\le j$.

And $\hat x_i(\mu)=0$ otherwise, and
$$\hat q(\mu)=\mu\left(\sum_{i=1}^j w_i-A\right)-\sum_{i=1}^j v_i$$
From this relation we see that, as $\mu$ increases, the slope of $\hat q(\mu)$ decreases from $\sum_{i=1}^n w_i-A$ to $-A$. Therefore, if $\sum_{i=1}^n w_i-A>0$, $\hat q(\mu)$ is maximized at the breakpoint where the slope of the curve goes from positive to negative. In this case the dual optimal value $\hat q^*$ is attained at $\mu^*=\frac{v_{i^*}}{w_{i^*}}$, where $i^*$ is the smallest $i$ such that
$$w_1+w_2+\dots+w_i>A$$

If $\sum_{i=1}^n w_i-A\le 0$, then the dual optimal value $\hat q^*$ is attained at $\mu^*=0$.
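A small added sketch (the data $v$, $w$, $A$ below are made up, not from the exercise) comparing a brute-force maximization of $\hat q$ with the breakpoint rule above:

```python
# q_hat(mu) = sum_i min(mu*w_i - v_i, 0) - mu*A; its maximizer should be
# the ratio v_i/w_i at which the cumulative weight crosses A.
import numpy as np

v = np.array([6.0, 5.0, 4.0, 3.0])   # ratios v/w = 2, 1.25, 0.8, 0.5 (sorted)
w = np.array([3.0, 4.0, 5.0, 6.0])
A = 10.0                             # cumulative weights 3, 7, 12, 18

def q_hat(mu):
    return np.minimum(mu * w - v, 0.0).sum() - mu * A

grid = np.linspace(0.0, 3.0, 30001)
best = grid[np.argmax([q_hat(mu) for mu in grid])]
print(best)   # 0.8 = v_3/w_3: the smallest i with w_1 + ... + w_i > A is i = 3
```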

(c)

Consider a relaxed version of the problem of minimizing $\hat f$:
$$\min\ \hat f_R(x)=-\sum_{i=1}^n v_ix_i\qquad \text{s.t.}\ \sum_{i=1}^n w_ix_i\le A,\quad x_i\in[0,1],\ i=1,\dots,n$$
Let $f_R^*$ and $q_R^*$ be the optimal values of the relaxed problem and its dual, respectively.

In the relaxed problem, the cost function is convex over $\mathbb{R}^n$ (in fact it is linear), and the constraint set is polyhedral.

Thus, according to Prop. 5.2.1, there is no duality gap, and $f_R^*=q_R^*$. The dual function of the relaxed problem is
$$q_R(\mu)=\inf_{x_i\in[0,1]}\left\{\sum_{i=1}^n(\mu w_i-v_i)x_i-\mu A\right\}$$

Again, $q_R(\mu)$ is separable and the minimum is attained at
$$\hat x_i(\mu)=\begin{cases}1 & \text{if } \mu<v_i/w_i\\ 0 & \text{if } \mu>v_i/w_i\\ \text{anything in } [0,1] & \text{otherwise}\end{cases}$$
Thus the solution is the same as that of the $\{0,1\}$-constrained problem for all $i$ with $\mu\ne v_i/w_i$. For $i$ with $\mu=v_i/w_i$, the value of $x_i$ is irrelevant to the dual function value. Therefore $\hat q(\mu)=q_R(\mu)$ for all $\mu$, and thus $\hat q^*=q_R^*$.

Following Example 5.1.2, it can be seen that an optimal primal and dual solution pair $(x^*,\mu^*)$ of the relaxed problem satisfies
$$\mu^*w_i=v_i\ \text{ if } 0<x_i^*<1,\qquad \mu^*w_i\ge v_i\ \text{ if } x_i^*=0,\qquad \mu^*w_i\le v_i\ \text{ if } x_i^*=1$$

In fact, it is straightforward to show that there is an optimal solution of the relaxed problem in which at most one $x_i$ satisfies $0<x_i^*<1$. Consider the solution $\hat x$ obtained from this optimal solution by setting that single fractional component to $0$. This solution is clearly feasible for the $\{0,1\}$-constrained problem, so we have
$$\hat f^*\le \hat f(\hat x)\le f_R^*+\max_{1\le i\le n}v_i$$
Combining with the earlier results,
$$\hat f^*\le q_R^*+\max_{1\le i\le n}v_i=\hat q^*+\max_{1\le i\le n}v_i$$
Since $\hat f^*=-f^*$ and $\hat q^*=-q^*$, we have the desired result.

(d)

Since the object weights and values remain the same, we have from (c) that $0\le q^*(k)-f^*(k)\le\max_{1\le i\le n}v_i$. By including in the subset each replica of an object used in the optimal solution of the original problem, we see that $f^*(k)\ge kf^*$. It then follows that
$$\lim_{k\to\infty}\frac{q^*(k)-f^*(k)}{f^*(k)}=0$$

Assignment 3

5.5.2

$$\min\ f(x)=10x_1+3x_2\qquad \text{s.t.}\ 5x_1+x_2\ge 4,\quad x_1,x_2\in\{0,1\}$$

Using Lagrangian relaxation:

The Lagrangian function is
$$L(x,\mu)=10x_1+3x_2+\mu(4-5x_1-x_2)=(10-5\mu)x_1+(3-\mu)x_2+4\mu$$
The dual problem is
$$\max\ q(\mu)=\min_{x\in\{0,1\}^2}L(x,\mu)\qquad \text{s.t.}\ \mu\ge 0$$
The solution is $\mu^*=2$, $q^*=8$.
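A brute-force evaluation of $q(\mu)$ over a grid (added) confirms $\mu^*=2$, $q^*=8$:

```python
# q(mu) = min over x in {0,1}^2 of (10 - 5 mu) x1 + (3 - mu) x2 + 4 mu.
import numpy as np

def q(mu):
    return min((10 - 5 * mu) * x1 + (3 - mu) * x2 + 4 * mu
               for x1 in (0, 1) for x2 in (0, 1))

grid = np.linspace(0.0, 5.0, 5001)
vals = [q(mu) for mu in grid]
i = int(np.argmax(vals))
print(grid[i], vals[i])   # 2.0 8.0
```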

Using constraint relaxation:

The relaxed problem is
$$\min\ f(x)=10x_1+3x_2\qquad \text{s.t.}\ 5x_1+x_2\ge 4,\quad x_1,x_2\in[0,1]$$
(a) when $x_1=0$, the problem is
$$\min\ f(x)=3x_2\qquad \text{s.t.}\ x_2\ge 4,\ x_2\in[0,1]$$
this problem is infeasible.

(b) when $x_1=1$, the problem is
$$\min\ f(x)=10+3x_2\qquad \text{s.t.}\ 5+x_2\ge 4,\ x_2\in[0,1]$$
The objective is minimized at $x_2=0$, so the solution is $(1,0)$; it is also an integer solution, so the point $(1,0)$ is the optimal point of the primal problem, with minimum $10$.

So the lower bound given by Lagrangian relaxation ($q^*=8$) is smaller than the optimal value ($10$) obtained via constraint relaxation.

5.5.10

(a) For any $\mu\ge 0$,
$$\bar q(\mu)=\inf_{x\in\bar X}\Big\{f(x)+\sum_{j=1}^{\bar r}\mu_jg_j(x)\Big\}=\inf_{\substack{x\in X\\ g_j(x)\le 0,\ j=\bar r+1,\dots,r}}\Big\{f(x)+\sum_{j=1}^{\bar r}\mu_jg_j(x)\Big\}$$
$$\ge\inf_{\substack{x\in X\\ g_j(x)\le 0,\ j=\bar r+1,\dots,r}}\Big\{f(x)+\sum_{j=1}^{r}\mu_jg_j(x)\Big\}\ge\inf_{x\in X}\Big\{f(x)+\sum_{j=1}^{r}\mu_jg_j(x)\Big\}=q(\mu)$$
(the first inequality holds because $\mu_j\ge 0$ and $g_j(x)\le 0$ for $j=\bar r+1,\dots,r$ on this set). So we have $\bar q^*\ge q^*$, and $\bar q^*\le f^*$ by the weak duality theorem, so $q^*\le\bar q^*\le f^*$.

If $q^*=f^*$, then since $q^*\le\bar q^*\le f^*$, we get $q^*=\bar q^*=f^*$.

(b)
$$\min\ f(x)=e^{-\sqrt{x_1x_2}}\qquad \text{s.t.}\ x_1=0,\quad x\in X=\{x\mid x\ge 0\}$$

$$\bar X=\{x\mid x\ge 0,\ x_1=0\}$$

and we can get $q^*=0$ and $\bar q^*=f^*=1$, so $q^*<\bar q^*=f^*$.

6.3.1

Let $\mu^*$ be a dual optimal solution. We obtain
$$\|\mu^{k+1}-\mu^*\|^2\le\|\mu^k-\mu^*\|^2-2s^k\big(q^*-q(\mu^k)\big)+(s^k)^2\|g^k\|^2$$
where $q^*=q(\mu^*)$ and $s^k=\frac{q^*-q(\mu^k)}{\|g^k\|^2}$, and hence
$$\|\mu^{k+1}-\mu^*\|^2\le\|\mu^k-\mu^*\|^2-\frac{\big(q^*-q(\mu^k)\big)^2}{\|g^k\|^2}$$
So
$$\|\mu^{k+1}-\mu^*\|\le\|\mu^k-\mu^*\|,\quad \forall k$$
implying that $\{\mu^k\}$ is bounded.

Let $C$ be a positive constant such that $\|g^k\|\le C$ for all $k$; then
$$\|\mu^{k+1}-\mu^*\|^2+\frac{\big(q^*-q(\mu^k)\big)^2}{C^2}\le\|\mu^k-\mu^*\|^2$$
By summing these inequalities over all $k$, we obtain
$$\frac{1}{C^2}\sum_{k=0}^{\infty}\big(q^*-q(\mu^k)\big)^2\le\|\mu^0-\mu^*\|^2$$
so $\lim_{k\to\infty}q(\mu^k)=q^*$.

Since $\{\mu^k\}$ is bounded, there exist a vector $\hat\mu$ and a subsequence $\{\mu^k\}_{k\in K}$ converging to $\hat\mu$. By using the upper semicontinuity of $q$, we have
$$q^*=\limsup_{k\to\infty,\,k\in K}q(\mu^k)\le q(\hat\mu)$$
so $\hat\mu$ is dual optimal. Now we show that $\{\mu^k\}$ actually converges. Let $M^*$ denote the set of all dual optimal solutions. Note that $M^*$ is convex (by concavity of $q$) and closed (by upper semicontinuity of $q$). Suppose that $\{\mu^k\}$ has two distinct limit points, say $\hat\mu\in M^*$ and $\tilde\mu\in M^*$. As seen above, for any $\mu^*\in M^*$ the sequence $\{\|\mu^k-\mu^*\|\}$ decreases monotonically, and therefore it converges. Hence $\|\hat\mu-\mu^*\|=\|\tilde\mu-\mu^*\|$ for all $\mu^*\in M^*$; taking $\mu^*=\hat\mu$ gives $\hat\mu=\tilde\mu$.

Let $q$ be real-valued and concave over the entire space $\mathbb{R}^r$. According to Prop. B.24 of Appendix B, since $\{\mu^k\}$ is bounded, the set $\bigcup_{k\ge 0}\partial q(\mu^k)$ is bounded, and so is $\{g^k\}$.
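As a small added illustration, the stepsize rule $s^k=\big(q^*-q(\mu^k)\big)/\|g^k\|^2$ can be applied to the dual of problem 5.5.2 above (where $q^*=8$):

```python
# Subgradient ascent on q(mu) with the q*-based stepsize.
def q_and_g(mu):
    best_val, best_g = None, None
    for x1 in (0, 1):
        for x2 in (0, 1):
            val = 10 * x1 + 3 * x2 + mu * (4 - 5 * x1 - x2)
            if best_val is None or val < best_val:
                best_val, best_g = val, 4 - 5 * x1 - x2   # supergradient
    return best_val, best_g

mu, q_star = 0.0, 8.0
for k in range(5):
    qk, g = q_and_g(mu)
    print(k, mu, qk)
    if g != 0:
        mu = max(0.0, mu + (q_star - qk) / g**2 * g)
# mu jumps from 0 to the optimum 2 in one step and then stays there.
```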

6.3.3

Suppose that $\liminf_{k\to\infty}\sqrt{k}\big(q^*-q(\mu^k)\big)>0$. Then there are an $\epsilon>0$ and a large enough $\bar k$ such that $\sqrt{k}\big(q^*-q(\mu^k)\big)\ge\epsilon$ for all $k\ge\bar k$, so
$$\big(q^*-q(\mu^k)\big)^2\ge\frac{\epsilon^2}{k},\qquad \forall k\ge\bar k$$
implying that
$$\sum_{k=\bar k}^{\infty}\big(q^*-q(\mu^k)\big)^2\ge\epsilon^2\sum_{k=\bar k}^{\infty}\frac1k=\infty$$
which contradicts the relation
$$\sum_{k=0}^{\infty}\big(q^*-q(\mu^k)\big)^2<\infty$$
Hence $\liminf_{k\to\infty}\sqrt{k}\big(q^*-q(\mu^k)\big)=0$.

(b)

As seen in 6.3.1, we have for all dual optimal solutions $\mu^*$ and all $k$
$$\|\mu^{k+1}-\mu^*\|^2\le\|\mu^k-\mu^*\|^2-\frac{\big(q^*-q(\mu^k)\big)^2}{\|g^k\|^2}$$
This relation and the inequality $q(\mu^*)-q(\mu^k)\ge a\|\mu^*-\mu^k\|$ yield for all $k$
$$\|\mu^{k+1}-\mu^*\|^2\le\|\mu^k-\mu^*\|^2-\frac{a^2\|\mu^k-\mu^*\|^2}{\|g^k\|^2}$$
from which, by using $\sup_{k\ge 0}\|g^k\|\le b$, we obtain
$$\|\mu^{k+1}-\mu^*\|^2\le\left(1-\frac{a^2}{b^2}\right)\|\mu^k-\mu^*\|^2$$
and the desired relation follows.

Assignment 4

2.1

(a)

*Figure: the precedence constraints graph*

(b)

We could use dynamic programming to solve this question; the computational process is the following:

*Figure: computational process of the dynamic program*

and the makespan of this project is 39 weeks
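The computation behind this is a longest-path dynamic program over the precedence graph; since the actual job data is contained in the figures above, the sketch below (added) uses a small hypothetical graph only to show the recursion:

```python
# Earliest finish time of each job = its duration plus the latest finish
# time among its predecessors; the makespan is the maximum over all jobs.
import functools

dur = {"A": 3, "B": 2, "C": 4, "D": 1}                 # hypothetical jobs
pred = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

@functools.lru_cache(maxsize=None)
def finish(j):
    return dur[j] + max((finish(p) for p in pred[j]), default=0)

print(max(finish(j) for j in dur))                     # 8 for this toy graph
```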

(c)

*Figure: the most costly (critical) path of the project*

This is the most costly path (the critical path) of the project; shortening any job on it saves project time week for week.

Job 7 (Walls) should be shortened first, since it takes 10 weeks and all the following jobs must wait for it to finish.

2.4

(a)

Processor sharing is a service discipline in which a number of jobs are processed simultaneously, so the processing of a job can be interrupted at any time. For a CPU, the processing already done on a preempted job is not lost, so the processor-sharing concept on a CPU is modeled by the preemptive-resume form.

(b)

Assume that switching jobs takes no time, that there are $n$ jobs to be processed, and that each job $j$ has a weight $w_j$ and a processing time $c_j$. The function $g(j,t)$ indicates whether job $j$ is still unfinished at time $t$: $g(j,t)=0$ if it is finished and $g(j,t)=1$ otherwise. The function $f(j,t)=1$ indicates that job $j$ is being processed at time $t$, and $f(j,t)=0$ that it is not.

We would like the CPU to finish high-weight jobs as quickly as possible, so we get the following optimization problem:
$$\min\ \sum_{j=1}^n\int_0^{+\infty}g(j,t)\,w_j\,dt$$
$$\text{s.t.}\quad \sum_{j=1}^n f(j,t)\le 1,\qquad
g(j,t)=\begin{cases}1 & \text{if } \int_0^t f(j,\tau)\,d\tau<c_j\\ 0 & \text{otherwise}\end{cases},\qquad
\int_0^{+\infty}f(j,\tau)\,d\tau=c_j$$
(the objective equals the weighted sum of completion times $\sum_j w_jC_j$).

2.9

| jobs | 1 | 2 | 3 | 4 |
|---|---|---|---|---|
| $p_{1j}$ | 8 | 6 | 4 | 12 |
| $p_{2j}$ | 4 | 9 | 10 | 6 |

Consider this instance of $O2\,\|\,C_{\max}$ with 4 jobs. The processing times of the four jobs on the two machines are given above. Find all optimal schedules and compute the makespan under an optimal schedule.

We can check that the sum of the processing times of jobs 1 and 4 on machine 1 ($8+12=20$) is close to the sum of the processing times of jobs 2 and 3 on machine 2 ($9+10=19$), and the sum of the processing times of jobs 1 and 4 on machine 2 ($4+6=10$) equals the sum of the processing times of jobs 2 and 3 on machine 1 ($6+4=10$). So all the optimal schedules are the following:

| machine 1 | machine 2 |
|---|---|
| 1->4->2->3 | 2->3->1->4 |
| 1->4->2->3 | 3->2->1->4 |
| 4->1->2->3 | 2->3->4->1 |
| 4->1->2->3 | 3->2->4->1 |
| (2->3 or 3->2)->(1->4 or 4->1) | (1->4 or 4->1)->(2->3 or 3->2) |

and the minimum makespan is 30
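These schedules can be checked by simulating the fixed machine sequences (added sketch; `makespan` is a helper name introduced here):

```python
# Simulate an O2 schedule: repeatedly start, among the two machines' next
# operations, the one with the earliest feasible start time.
p = {1: (8, 4), 2: (6, 9), 3: (4, 10), 4: (12, 6)}     # job: (p1j, p2j)

def makespan(seq1, seq2):
    seqs = [list(seq1), list(seq2)]
    mach_free = [0, 0]
    job_free = {j: 0 for j in p}
    idx = [0, 0]
    while idx[0] < len(seqs[0]) or idx[1] < len(seqs[1]):
        cand = []
        for m in (0, 1):
            if idx[m] < len(seqs[m]):
                j = seqs[m][idx[m]]
                cand.append((max(mach_free[m], job_free[j]), m, j))
        t, m, j = min(cand)                            # earliest startable op
        mach_free[m] = job_free[j] = t + p[j][m]
        idx[m] += 1
    return max(mach_free)

print(makespan([1, 4, 2, 3], [2, 3, 1, 4]))            # 30
print(makespan([2, 3, 1, 4], [1, 4, 2, 3]))            # 30
```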

a

Find the optimal sequence for the following $1\,\|\,\sum_i w_iT_i$ problem:

| jobs | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
|---|---|---|---|---|---|---|---|
| $p_i$ | 6 | 18 | 12 | 10 | 10 | 17 | 16 |
| $w_i$ | 1 | 5 | 2 | 4 | 1 | 4 | 2 |
| $d_i$ | 8 | 42 | 44 | 24 | 90 | 85 | 68 |

For this problem $T_i=\max(C_i-d_i,0)$; to minimize $\sum_i w_iT_i$ we should try to make $C_i\le d_i$ for as many (heavily weighted) jobs as possible.

So we first schedule job 1 and then job 4 (both finish before their due dates); the sequence so far is 1->4, and after subtracting the elapsed time $p_1+p_4=16$ from the remaining due dates the problem changes to:

| jobs | 2 | 3 | 5 | 6 | 7 |
|---|---|---|---|---|---|
| $p_i$ | 18 | 12 | 10 | 17 | 16 |
| $w_i$ | 5 | 2 | 1 | 4 | 2 |
| $d_i$ | 26 | 28 | 74 | 69 | 52 |

Then we choose jobs 2 and 3. We can see that $w_2=5>w_3=2$ and $p_2+p_3=30>28>26$, so whichever of the two goes second will be tardy. To keep the objective small, job 2 (the heavier one) should be done before job 3; the sequence so far is 1->4->2->3, and the problem changes to:

| jobs | 5 | 6 | 7 |
|---|---|---|---|
| $p_i$ | 10 | 17 | 16 |
| $w_i$ | 1 | 4 | 2 |
| $d_i$ | 44 | 39 | 22 |

Then we get the final sequence: 1->4->2->3->7->6->5.

and the minimum of the objective function is $\sum_i w_iT_i=4$
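A quick added check of the weighted tardiness of this sequence:

```python
# Compute sum_i w_i * max(C_i - d_i, 0) for the sequence 1->4->2->3->7->6->5.
p = {1: 6, 2: 18, 3: 12, 4: 10, 5: 10, 6: 17, 7: 16}
w = {1: 1, 2: 5, 3: 2, 4: 4, 5: 1, 6: 4, 7: 2}
d = {1: 8, 2: 42, 3: 44, 4: 24, 5: 90, 6: 85, 7: 68}

t, total = 0, 0
for j in [1, 4, 2, 3, 7, 6, 5]:
    t += p[j]
    total += w[j] * max(t - d[j], 0)
print(total)   # 4: only job 3 is tardy (C_3 = 46 > d_3 = 44, w_3 = 2)
```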

b

For the above problem, what is the optimal schedule if there are two identical machines?

the schedule changes to:

machine 1: 1->2->7->5

machine 2: 4->3->6
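And a check (added) that this two-machine schedule has no tardy jobs at all:

```python
# Each machine processes its own sequence; tardiness is computed per machine.
p = {1: 6, 2: 18, 3: 12, 4: 10, 5: 10, 6: 17, 7: 16}
w = {1: 1, 2: 5, 3: 2, 4: 4, 5: 1, 6: 4, 7: 2}
d = {1: 8, 2: 42, 3: 44, 4: 24, 5: 90, 6: 85, 7: 68}

total = 0
for seq in ([1, 2, 7, 5], [4, 3, 6]):
    t = 0
    for j in seq:
        t += p[j]
        total += w[j] * max(t - d[j], 0)
print(total)   # 0: every job completes by its due date
```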

c

In applying Lagrangian relaxation to parallel machine scheduling problems, what happens when ALL the parts are identical, i.e., have the same processing time and the same due date? Examine the problem from both the primal aspect and the dual aspect. What can be done to improve algorithm performance?

When all the parts are identical, the relaxed problem separates into identical sub-problems, so the sub-problem solutions are identical as well: from the primal aspect all parts tend to be scheduled at the same times (making the machine-capacity constraints hard to satisfy), and from the dual aspect the multipliers give no way to distinguish the parts. To improve performance, we can treat all the identical parts as a whole (and likewise the machines), which simplifies the parallel-machine problem to a one-machine problem.