XJTU System Optimization and Scheduling Course Assignments
Assignments for the graduate course System Optimization and Scheduling (系统优化与调度) at Xi'an Jiaotong University.
Assignment 1
(1) 1.1.2
(a)
So there are three stationary points: (2, 0), (-2, 0), and (0, 0).
The Hessian matrix is positive semi-definite only when .
So the points (2, 0) and (-2, 0) are local minimum points, while the point (0, 0) is not a local minimum. Hence (2, 0) and (-2, 0) are global minimum points of f.
Let ; it is easy to see that the Hessian matrix is not positive semi-definite when . So the point (0, 0) is not a minimum of , which means it is also not a local maximum of .
(b)
We have only if or .
If , only if .
If , only if .
So the stationary points of are and .
For points , .
For points , . So none of the stationary points is a local minimum point.
Hence the function has no local minima.
(c)
Subtracting (2) from (1): , so .
If , (1) becomes , and using the double-angle formula: .
Solutions: , .
If , (1) becomes .
Solutions: , .
The stationary points are .
For point : its Hessian matrix is negative definite, so this point is a local maximum point, with maximum value .
For point : this point is neither a local maximum nor a local minimum.
For point : this point is neither a local maximum nor a local minimum.
(d)
Thus the stationary point is . For this point, the Hessian matrix is neither positive definite nor negative definite, so this stationary point is neither a local maximum nor a local minimum.
(e)
By the conclusion of (d), the stationary point is neither a local maximum nor a local minimum, so the minimum can only be attained on the boundary.
Consider ; then .
In this case, the minimum point is and the minimum is 1.
Consider ; then .
In this case, the minimum points are , and the minimum is .
Summarizing, under the constraint on , a global minimum exists; the global minimum points are , and the global minimum value is .
(2) 1.1.6
(a)
Suppose ; then the problem can be transformed to: . Suppose ; then: . Let ; then the stationary point is , and is positive definite at this point, so it is the global minimum.
(b)
There is only one stationary point in this problem, so the solution is unique.
(c)
Take the plane that the points lie on as the reference level.
Suppose the length of the string attached to weight is ; then the problem is: . We can see that is constant, so the problem can be transformed to: , which is the same as the problem in (a).
(3) 1.2.1
(a)
The starting point is , so the descent direction is . Then choose the stepsize: try at first, then , then , then , then .
So the first stepsize is 0.0625, and after the iteration the point changes to .
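The stepsize sequence above (1, 0.5, 0.25, 0.125, 0.0625) is a successive-halving (Armijo-style backtracking) rule. A minimal sketch of one such iteration in Python, with a hypothetical objective standing in for the one in the exercise:

```python
import numpy as np

def backtracking_step(f, grad, x, beta=0.5, sigma=1e-4):
    """One steepest-descent iteration with a halving (Armijo) stepsize rule:
    start from s = 1 and multiply by beta until sufficient decrease holds."""
    d = -grad(x)                       # steepest-descent direction
    s = 1.0
    while f(x + s * d) > f(x) + sigma * s * grad(x) @ d:
        s *= beta                      # 1, 0.5, 0.25, 0.125, 0.0625, ...
    return x + s * d, s

# Hypothetical ill-conditioned quadratic, standing in for the exercise's f
Q = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x1, s1 = backtracking_step(f, grad, np.array([1.0, 1.0]))
print(f"first stepsize = {s1}, new point = {x1}")
```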
(b)
When , the result is as above.
Then , so the first stepsize is 0.1, and after the iteration the point changes to .
(c)
At point , the Hessian matrix is: . Its inverse matrix is: . So the descent direction is: . Then choose the stepsize: try at first; the first stepsize is 1, and the point changes to .
Compared with (a), the computational cost of Newton's method is lower.
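For comparison, a sketch of one Newton iteration; on a quadratic the unit stepsize is accepted immediately, which is why the method terminates so quickly here (the objective below is hypothetical):

```python
import numpy as np

def newton_step(grad, hess, x):
    """One Newton iteration: solve H(x) d = -grad(x) and take a unit step.
    np.linalg.solve avoids forming the inverse Hessian explicitly."""
    d = np.linalg.solve(hess(x), -grad(x))
    return x + d

# On the same hypothetical quadratic, Newton reaches the minimum in one step.
Q = np.diag([1.0, 100.0])
x1 = newton_step(lambda x: Q @ x, lambda x: Q, np.array([1.0, 1.0]))
print(x1)   # [0. 0.]
```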
(4) 1.2.6
(a)
PROOF:
(b)
PROOF: It can be shown that: by iteration: , then: , so: ,
then: . Summarizing, the method converges to for every starting point if and only if .
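The algebra here did not survive extraction, but results of the form "the method converges for every starting point iff [stepsize condition]" typically reduce to a spectral-radius argument. As an illustration (assuming constant-stepsize gradient descent on a positive definite quadratic, which may differ from the exercise's exact setting):

$$
x^{k+1} = (I - \alpha Q)\, x^k \quad\Longrightarrow\quad x^k = (I - \alpha Q)^k x^0,
$$

which converges to $0$ for every $x^0$ iff $|1 - \alpha\lambda| < 1$ for every eigenvalue $\lambda$ of $Q$, i.e. iff $0 < \alpha < 2/\lambda_{\max}(Q)$.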
(5) 1.3.1
Let and be the normalized eigenvectors of Q corresponding to m and M, respectively, and let . Then, after the first iteration using the line minimization stepsize rule (i.e., a stepsize ), we get . Therefore, starting the iterations with , we have for all k.
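For reference, on a quadratic $f(x) = \tfrac{1}{2} x^\top Q x$ (the setting of this exercise), the line-minimization stepsize has the closed form, with $g^k = \nabla f(x^k) = Q x^k$:

$$
\alpha^k = \arg\min_{\alpha \ge 0} f(x^k - \alpha g^k) = \frac{(g^k)^\top g^k}{(g^k)^\top Q\, g^k}.
$$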
(6) 1.3.4
Then: . Since for any , then: . By the iteration, , so: , i.e. .
(7) 1.4.1
In Newton's method, we have: . Consider ; then: . Since , consider ; then: . So , and for : . Summarizing, the sequence generated by Newton's method .
(8)
PROOF:
Let $h(x) = g(f(x))$ and take any $x$, $y$ and $\lambda \in [0, 1]$.
Since $f$ is convex, $f(\lambda x + (1-\lambda)y) \le \lambda f(x) + (1-\lambda)f(y)$. Since $g$ is a monotone nondecreasing function, $g\big(f(\lambda x + (1-\lambda)y)\big) \le g\big(\lambda f(x) + (1-\lambda)f(y)\big)$. Since $g$ is convex, $g\big(\lambda f(x) + (1-\lambda)f(y)\big) \le \lambda g(f(x)) + (1-\lambda)g(f(y))$. Therefore, $h(\lambda x + (1-\lambda)y) \le \lambda h(x) + (1-\lambda)h(y)$. This implies that $h = g \circ f$ is convex.
(9)
(a)
The solution of the equation system above is ; it is the point satisfying the first-order necessary conditions.
(b)
It is easy to see that the Hessian matrix is positive definite everywhere, so the point is a global minimum point, and the global minimum value is 0.
(c)
For the steepest descent method,
(10)
(a)
The solution of the equation system above is , and the Hessian matrix is positive definite everywhere, so the point is an unconstrained local minimum point.
(b)
Because the Hessian matrix is positive definite everywhere, the function is convex over the whole real space.
(c)
The minimum points must lie on the boundary of the constraint.
Consider ; then . The minimum point is and the minimum is .
Consider ; then . The minimum point is and the minimum is .
Therefore, the minimum point of f subject to is and the minimum is .
(d)
For the steepest descent method,
Assignment 2
(1)
Q: Consider the quadratic program . Prove that is a local minimum point if and only if it is a global minimum point. (No convexity is assumed.)
Proof:
Since no convexity is assumed, we consider two cases:
If Q is a positive definite matrix, it is obvious that is a local minimum point if and only if it is a global minimum point.
If Q is not a positive definite matrix:
Assume is a local minimum point, and let be a global minimum point.
So , and because , we have .
So we have .
Then, according to the optimality condition, we have , and we have since and .
Then .
Because is the global minimum, we get . We can then conclude that the local minimum is the global minimum.
(2)
Suppose that the projected negative gradient d is calculated satisfying , and that some component of , corresponding to an inequality, is negative. Show that if the th inequality is dropped, the projection of the negative gradient onto the remaining constraints is a feasible direction of descent.
Proof:
Let be the new direction vector obtained by relaxing the th constraint and projecting the negative gradient onto the subspace determined by the remaining active constraints.
Let denote the matrix with row deleted.
We know that .
Multiplying the equation by and using , we obtain: .
Since , we conclude that .
So the vector is a direction of descent, and it is also a feasible direction.
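For reference, the standard projection formulas behind this argument (notation assumed, following the usual gradient projection method; $A_q$ has the gradients of the active constraints as rows, and $\bar{A}_q$ is $A_q$ with the $j$th row deleted):

$$
\bar{P} = I - \bar{A}_q^\top (\bar{A}_q \bar{A}_q^\top)^{-1} \bar{A}_q, \qquad \bar{d} = -\bar{P}\, \nabla f(x).
$$

Since $\bar{P}$ is symmetric and idempotent, $\nabla f(x)^\top \bar{d} = -\lVert \bar{P}\, \nabla f(x) \rVert^2 \le 0$, and $\bar{A}_q \bar{d} = 0$ keeps the remaining active constraints satisfied.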
(3) 3.1.9
(a)
Consider the Lagrangian relaxation: let ; then we have , and then: , so .
(b)
Consider all lines parallel to the line that passes through and , and that intersect the circle.
The maximal area is attained only if the point is the point of tangency between such a parallel line and the circle.
We can see that the line through and is also the line perpendicular to the tangent line through point , so it must pass through the center of the circle.
(c)
Fix two vertices and the corresponding side of the triangle, and optimize over the remaining vertex; the line from the optimal vertex through the center of the circle should be orthogonal to the fixed side. Hence the optimum is obtained when the two remaining sides are equal. Repeating this argument, fixing each side of the triangle in turn, we see that all sides should be equal.
(d)
A similar line of argument as in (c) applies.
(4) 3.3.5
(a)
Let , so that , and hence .
And we have: .
Because is a constrained local minimum, .
So .
(b)
If x* is a constrained local minimum, from (a) we get . According to Farkas' Lemma, this holds iff there exists such that . Setting for , we have the desired result.
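For reference, the form of Farkas' Lemma invoked above, stated for vectors $a_1, \dots, a_r$ (the active constraint gradients) and $c$:

$$
c^\top d \le 0 \ \text{ for all } d \text{ with } a_j^\top d \le 0,\ j = 1, \dots, r
\quad\Longleftrightarrow\quad
c = \sum_{j=1}^{r} \mu_j a_j \ \text{ for some } \mu_j \ge 0.
$$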
(c)
We want to show that , where is the cone of first-order feasible variations given by .
By the Mean Value Theorem, for each and for any there exists such that .
Because , we have . This implies that $\nabla g_j(x^*)^\top d \le 0$ for all $j \in A(x^*)$, so that . Therefore ; since is closed, .
Under each of the conditions 1-4, we can also prove that .
So we get that .
(d)
It is easy to see that the local minimum is . So we have . Obviously $g_1$ and $g_2$ are not linear, so condition (1) is violated. Furthermore, there is no vector such that , so condition (2) is violated. Similarly, condition (3) is violated. The vectors and are linearly dependent since , so condition (4) is violated.
Assume ; then we have: . It follows that . So there is no Lagrange multiplier.
(e)
From the description we know that is also a local minimum for the modified problem.
The modified problem has a single constraint , which is active at .
Since is not linear, condition (1) is violated.
Because , conditions (2) and (4) are violated. If condition (3) held, condition (2) would hold too, so condition (3) is violated as well. From and , it follows that , so there is no Lagrange multiplier.
(5) 5.1.7
(a)
Then the total weight and value of the objects included are $\sum_{i=1}^{n} w_i x_i$ and $\sum_{i=1}^{n} v_i x_i$, respectively, and the problem can be written as
$$
\max\ f(x) = \sum_{i=1}^{n} v_i x_i \\
\text{s.t.}\ \sum_{i=1}^{n} w_i x_i \le A, \quad x_i \in \{0, 1\},\ i = 1, \dots, n.
$$
(b)
Let and consider the equivalent problem of minimizing subject to the constraints given above. Then and . Note that the minimization above is a separable problem, and the minimum is attained at . Without loss of generality, assume that the objects are ordered such that: . When for some with , then for all ; otherwise, and .
From this relation, we see that, as μ increases, the slope of decreases from to . Therefore, if , is maximized when the slope of the curve goes from positive to negative. In this case, the dual optimal value is attained at , where is the largest such that . If , then the dual optimal value is attained at .
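A minimal sketch of evaluating this dual in Python, assuming the formulation of (a); the data are hypothetical and the breakpoint search mirrors the slope argument above:

```python
import numpy as np

def knapsack_dual(mu, v, w, A):
    """Dual function q(mu) of the {0,1} knapsack problem
    min -sum(v_i x_i) s.t. sum(w_i x_i) <= A.
    The Lagrangian minimization is separable: x_i = 1 iff v_i > mu * w_i."""
    x = (v > mu * w).astype(float)     # separable inner minimization
    return -(v @ x) + mu * (w @ x - A)

# Hypothetical data for illustration
v = np.array([6.0, 5.0, 4.0])          # values
w = np.array([3.0, 2.0, 2.0])          # weights
A = 4.0                                # weight capacity

# q is piecewise linear and concave in mu, with breakpoints at v_i / w_i,
# so the dual optimum is attained at one of those ratios (or at mu = 0).
candidates = np.append(v / w, 0.0)
q_vals = [knapsack_dual(mu, v, w, A) for mu in candidates]
best = int(np.argmax(q_vals))
print(f"dual optimal mu = {candidates[best]:.3f}, q = {q_vals[best]:.3f}")
```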
(c)
Consider a relaxed version of the problem of minimizing :
$$
\min\ f_R(x) = -\sum_{i=1}^{n} v_i x_i \\
\text{s.t.}\ \sum_{i=1}^{n} w_i x_i \le A, \quad x_i \in [0, 1],\ i = 1, \dots, n.
$$
Let and be the optimal values of the relaxed problem and its dual, respectively.
In the relaxed problem, the cost function is convex over (in fact it is linear), and the constraint set is polyhedral. Thus, according to Prop. 5.2.1, there is no duality gap, and . The dual function of the relaxed problem is: .
Again, is separable and the minimum is attained at . Thus the solution is the same as for the constrained problem for all with . For with , the value of is irrelevant to the dual function value. Therefore, for all μ, and thus .
Following Example 5.1.2, it can be seen that the optimal primal and dual solution pair of the relaxed problem satisfies
$$
\mu^* w_i = v_i, \quad \text{if } 0 < x_i^* < 1, \\
\mu^* w_i \ge v_i, \quad \text{if } x_i^* = 0, \\
\mu^* w_i \le v_i, \quad \text{if } x_i^* = 1.
$$
In fact, it is straightforward to show that there is an optimal solution of the relaxed problem such that at most one component satisfies $0 < x_i^* < 1$. Consider a solution equivalent to this optimal solution, with the exception that the component is set to 0 if $0 < x_i^* < 1$. This solution is clearly feasible for the {0, 1} constrained problem, so that we have . Combining with the earlier results, . Since and , we have the desired result.
(d)
Since the object weights and values remain the same, we have from (c) that . By including in the subset each replica of an object assembled in the optimal solution to the original problem, we see that . It then follows that .
Assignment 3
5.5.2
Using Lagrangian relaxation:
The Lagrangian function is . The dual problem is: . The solution is .
Using constraint relaxation:
The relaxed problem is: (a) when , the problem is: . This problem is infeasible.
When , the problem is: .
When , the value of the objective function is minimized, so the solution is . It is also an integer solution, so the point is the optimal point of the primal problem, and the minimum is 10.
So the lower bound given by Lagrangian relaxation is smaller than the one given by constraint relaxation.
5.5.10
So we have , and we have by the weak duality theorem, so .
If , then we have ,
so we can get ,
and we can get , so .
6.3.1
Let be a dual optimal solution. We obtain: , where , , and we have: . So , implying that is bounded.
Let be a positive constant such that for all ; then . By summing these inequalities over all , we obtain , so .
Since is bounded, there exists a vector and a subsequence converging to . By using the upper semicontinuity of q, we have . Now we show that actually converges. Let denote the set of all dual optimal solutions. Note that is convex (by concavity of ) and closed (by upper semicontinuity of ). Suppose that has two distinct limit points, say and . As seen in (a), for any , the sequence decreases monotonically, and therefore it converges. Hence for all , implying that .
Let be real-valued and concave
over the entire space .
According to Prop. B.24 of Appendix B, since is bounded, the set is bounded, and so is
.
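As a concrete illustration of the iteration analyzed above, here is a minimal sketch (in Python, with a hypothetical one-dimensional dual function; the stepsize rule and problem data are assumptions, not taken from the exercise) of projected subgradient ascent on a concave dual:

```python
import numpy as np

def subgradient_ascent(subgrad_q, mu0, steps=1000):
    """Projected subgradient ascent on a concave dual function q:
    mu^{k+1} = max(0, mu^k + s_k g^k), with g^k a subgradient of q at mu^k
    and the diminishing stepsize s_k = 1/(k+1)."""
    mu = np.asarray(mu0, dtype=float)
    for k in range(steps):
        g = subgrad_q(mu)
        mu = np.maximum(0.0, mu + g / (k + 1))   # projection onto mu >= 0
    return mu

# Hypothetical dual q(mu) = -(mu - 2)^2 with subgradient -2(mu - 2)
print(subgradient_ascent(lambda mu: -2.0 * (mu - 2.0), mu0=[0.0]))  # ~[2.]
```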
6.3.3
(a)
Suppose that . Then there is an and a large enough such that for all , so , implying that , which contradicts the relation .
(b)
As seen in 6.3.1, we have for all dual optimal solutions and all . This relation and the inequality yield for all , from which, by using , we obtain , and the desired relation follows.
Assignment 4
2.1
(a)
[Figure: precedence constraints graph]
(b)
We could use dynamic programming to solve this question; the computational process is as follows (a code sketch is given after this part):
[Figure: computational process of the DP]
The makespan of this project is 39 weeks.
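A minimal sketch of this dynamic program in Python; the durations and precedence lists below are placeholders, not the actual job data of the exercise (which is in the figure above):

```python
from functools import lru_cache

# Placeholder data: durations (weeks) and predecessor lists; the real job
# table comes from the assignment and is not reproduced here.
duration = {1: 3, 2: 5, 3: 4, 4: 10, 5: 6}
preds = {1: [], 2: [1], 3: [1], 4: [2, 3], 5: [4]}

@lru_cache(maxsize=None)
def completion(j):
    """Earliest completion time of job j: its own duration plus the latest
    completion time among its predecessors (the DP recursion used above)."""
    return duration[j] + max((completion(p) for p in preds[j]), default=0)

makespan = max(completion(j) for j in duration)
print(f"makespan = {makespan} weeks")
```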
(c)
[Figure: most costly trace (critical path) of the project]
This is the most costly trace of the project; shortening any job on it by a week saves a week of project time.
Job 7 (Walls) should be shortened the most, since it takes 10 weeks and all the following work must wait for it to finish.
2.4
(a)
Processor sharing is a service discipline in which a number of jobs are processed simultaneously, and processing can be interrupted at any time. For a CPU, the processing already done on a preempted job is not lost, so the processor-sharing concept on a CPU is modeled in preemptive-resume form.
(b)
Assume that switching between jobs takes no time, and that there are jobs to be processed, where each job has a weight and a processing time . The function indicates whether job has been finished by time : if finished, ; otherwise . The function indicates that job is being processed at time t, while indicates that job is not being processed at time t.
We would like the CPU to process high-weight jobs as quickly as possible, so we get the following optimization problem:
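The formulation itself did not survive on the page. A plausible reconstruction under the definitions above is a weighted completion-time objective; the notation $p_j$ (processing time), $w_j$ (weight), $u_j(t)$ (processing indicator), and $C_j$ (completion time) is assumed here, not taken from the original:

$$
\min \sum_{j=1}^{n} w_j C_j, \qquad
C_j = \min\{\, t : x_j(t) = 1 \,\}, \\
\text{s.t.}\ \sum_{j=1}^{n} u_j(t) \le 1 \ \ \forall t, \qquad
\int_0^{C_j} u_j(t)\, dt = p_j, \ \ j = 1, \dots, n,
$$

i.e., at any instant the CPU's total processing effort is at most 1, and each job must receive its full processing time before it completes.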