CS260: Algorithms

Class Test sample solutions
December 2020

  1. (a) True; (b) True; (c) False; (d) False; (e) True; (f) False; (g) True; (h) True.

  2. (a) cS(1) = 5
    cS(2) = 5 + 3 + 2 = 10
    cS(3) = 5 + 3 = 8
    (b) `S(1) = 3 · 5 = 15
    `S(2) = 1 · 10 = 10
    `S(3) = 2 · 8 = 16
    LS = 15 + 10 + 16 = 41
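The computations in parts (a) and (b) can be reproduced with a short sketch; the arrays t = [5, 2, 3] and u = [3, 1, 2] and the schedule (1, 3, 2) are inferred from the sample values above and are illustrative only.

```python
def weighted_lateness(schedule, t, u):
    """Return (completion times c, per-job urgency-weighted lateness ell,
    total lateness L) for a schedule given as 1-based job numbers."""
    c, ell, elapsed = {}, {}, 0
    for j in schedule:
        elapsed += t[j - 1]       # job j finishes after all jobs scheduled before it
        c[j] = elapsed
        ell[j] = u[j - 1] * c[j]  # urgency-weighted lateness of job j
    return c, ell, sum(ell.values())

c, ell, L = weighted_lateness((1, 3, 2), t=[5, 2, 3], u=[3, 1, 2])
print(c, ell, L)  # completion times 5, 8, 10; lateness 15, 16, 10; total 41
```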
    (c) We have:
    LS − LS′ = (ℓS(1) + ℓS(2)) − (ℓS′(2) + ℓS′(1))
    = (u[1]t[1] + u[2](t[1] + t[2])) − (u[2]t[2] + u[1](t[2] + t[1]))
    = u[2]t[1] − u[1]t[2],
    and hence schedule S = (1, 2) has smaller total urgency-weighted lateness than
    schedule S′ = (2, 1) if and only if u[2]t[1] − u[1]t[2] < 0, which is equivalent to:
    u[1]/t[1] > u[2]/t[2].
    (d) An algorithm. Output the schedule in which jobs are ordered by non-increasing ratios
    u[j]/t[j].
    Algorithm correctness. Firstly, we slightly generalize the analysis from part (c) to
    the result of swapping a pair of consecutive jobs in any schedule.
    Fact 1. If S = (j1, j2, . . . , jn) is a schedule and S′ is the schedule obtained from S
    by swapping the order of the two jobs ji and ji+1 that are consecutive in S, then
    LS − LS′ = u[ji+1]t[ji] − u[ji]t[ji+1].
    Proof. Note that the completion time of all jobs other than ji or ji+1 is the same in
    schedules S and S′. It follows that also the urgency-weighted lateness of jobs other
    than ji or ji+1 is the same in schedules S and S′. Let T be the completion time of the
    job immediately preceding job ji in S, which is the same as the completion time of the
    job immediately preceding job ji+1 in S′ (or 0 if i = 1). We then have:
    LS − LS′ = (ℓS(ji) + ℓS(ji+1)) − (ℓS′(ji+1) + ℓS′(ji))
    = u[ji](T + t[ji]) + u[ji+1](T + t[ji] + t[ji+1])
    − u[ji+1](T + t[ji+1]) − u[ji](T + t[ji+1] + t[ji])
    = u[ji+1]t[ji] − u[ji]t[ji+1].
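Fact 1 can also be checked numerically: swapping any two consecutive jobs changes the total lateness by exactly u[ji+1]t[ji] − u[ji]t[ji+1]. The durations t and urgencies u below are arbitrary sample data, not taken from the question.

```python
def total_lateness(schedule, t, u):
    """Total urgency-weighted lateness of a schedule of 1-based job numbers."""
    elapsed, total = 0, 0
    for j in schedule:
        elapsed += t[j - 1]
        total += u[j - 1] * elapsed
    return total

t, u = [4, 1, 6, 2], [2, 5, 1, 3]   # arbitrary illustrative instance
S = (3, 1, 4, 2)
for i in range(len(S) - 1):
    swapped = list(S)
    swapped[i], swapped[i + 1] = swapped[i + 1], swapped[i]
    ji, ji1 = S[i], S[i + 1]
    diff = total_lateness(S, t, u) - total_lateness(tuple(swapped), t, u)
    # the change predicted by Fact 1
    assert diff == u[ji1 - 1] * t[ji - 1] - u[ji - 1] * t[ji1 - 1]
```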
    Without loss of generality, rename all the jobs 1, 2, . . . , n so that the output of our
    algorithm is the schedule (1, 2, . . . , n). Note that we then have:
    u[1]/t[1] ≥ u[2]/t[2] ≥ · · · ≥ u[n]/t[n].
    We say that a schedule is optimal if it is a schedule that minimizes the total urgency-
    weighted lateness.
    Fact 2. Schedule (1, 2, . . . , n) is optimal.
    Proof. We say that (i, k) is an inversion in schedule (j1, j2, . . . , jn) if i < k and ji > jk.
    Note that (1, 2, . . . , n) is the only schedule with 0 inversions.
    Let S⋆ = (j1, j2, . . . , jn) be an optimal schedule with the smallest number of inversions.
    We argue that the number of inversions in S⋆ is 0, and hence S⋆ = (1, 2, . . . , n).
    For the sake of contradiction, suppose that there is at least one inversion in S⋆. Then
    there is at least one adjacent inversion (i, i + 1), that is, a pair of consecutive
    positions with ji > ji+1. Note that the schedule S⋆′ obtained from S⋆ by swapping the
    order of jobs ji and ji+1 has one less inversion than S⋆. We show that the total
    urgency-weighted lateness of S⋆′ is no larger than that of S⋆, and hence S⋆′ is also
    optimal, which contradicts S⋆ being an optimal schedule with the smallest number of
    inversions.
    Since (i, i + 1) is an inversion, we have ji > ji+1, which implies that:
    u[ji+1]/t[ji+1] ≥ u[ji]/t[ji],
    and hence u[ji+1]t[ji] ≥ u[ji]t[ji+1]. It then follows from Fact 1 that:
    LS⋆ − LS⋆′ = u[ji+1]t[ji] − u[ji]t[ji+1] ≥ 0,
    and hence schedule S⋆′ is also optimal because LS⋆′ ≤ LS⋆.
    Asymptotic running time. If Merge Sort or Heap Sort is used to sort the jobs by
    non-increasing ratios u[j]/t[j], then the algorithm runs in O(n log n) time.
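The part (d) algorithm can be sketched as follows; run on the instance from parts (a) and (b) (with t = [5, 2, 3] and u = [3, 1, 2] inferred from the sample values), it returns schedule (3, 1, 2) with total lateness 40, improving on the 41 of the given schedule (1, 3, 2).

```python
def min_lateness_schedule(t, u):
    """Sort jobs by non-increasing ratio u[j]/t[j] and return the resulting
    schedule (1-based job numbers) with its total urgency-weighted lateness.
    The sort dominates the running time: O(n log n)."""
    n = len(t)
    order = sorted(range(1, n + 1), key=lambda j: u[j - 1] / t[j - 1], reverse=True)
    elapsed, total = 0, 0
    for j in order:
        elapsed += t[j - 1]
        total += u[j - 1] * elapsed
    return order, total

print(min_lateness_schedule([5, 2, 3], [3, 1, 2]))  # ([3, 1, 2], 40)
```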
  3. (a) This greedy algorithm is correct.
    Without loss of generality, renumber files in such a way that the algorithm considers them
    by increasing file number; note that then we have S[1] ≤ S[2] ≤ · · · ≤ S[n].
    Suppose that S = { i1, i2, . . . , ik } is an optimal solution, that is, k is the maximum
    number of files that can be stored on the device. Without loss of generality, assume
    that i1 < i2 < · · · < ik, and note that S[i1] + S[i2] + · · · + S[ik] ≤ C.
    Observe that for every j = 1, 2, . . . , k we have j ≤ ij, and hence S[j] ≤ S[ij].
    This implies that the greedy algorithm will select at least k files because
    S[1] + S[2] + · · · + S[k] ≤ S[i1] + S[i2] + · · · + S[ik] ≤ C.
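The smallest-first greedy from part (a) can be sketched as follows; the sizes and capacity in the usage line are illustrative.

```python
def max_files(sizes, capacity):
    """Greedily store files in non-decreasing order of size; return the
    1-based file numbers of the files stored."""
    chosen, used = [], 0
    for i in sorted(range(1, len(sizes) + 1), key=lambda i: sizes[i - 1]):
        if used + sizes[i - 1] <= capacity:  # file i still fits
            chosen.append(i)
            used += sizes[i - 1]
    return chosen

print(max_files([4, 1, 3, 2], 6))  # files 2, 4, 3 fit (sizes 1 + 2 + 3 = 6)
```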
    (b) This greedy algorithm is not correct.
    Consider three files of sizes 2, 2, and 3, respectively, and a device with capacity 4. The
    algorithm will select one file of size 3, while the optimal solution selects the two files of
    sizes 2 and 2.
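The counterexample can be checked mechanically with a largest-first variant of the same sketch: on sizes [2, 2, 3] with capacity 4 it stores a single file, while the two files of size 2 fit together.

```python
def greedy_largest_first(sizes, capacity):
    """Greedily store files in non-increasing order of size; return the
    1-based file numbers of the files stored."""
    chosen, used = [], 0
    for i in sorted(range(1, len(sizes) + 1), key=lambda i: sizes[i - 1], reverse=True):
        if used + sizes[i - 1] <= capacity:
            chosen.append(i)
            used += sizes[i - 1]
    return chosen

print(greedy_largest_first([2, 2, 3], 4))  # only file 3 is stored
```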
    (c) Given an instance I of this problem:
    device capacity C;
    file sizes S[1], S[2], . . . , S[n];
    star ratings R[1], R[2], . . . , R[n];
    consider the following instance I ′ of the knapsack problem:
    knapsack capacity C;
    item weights S[1], S[2], . . . , S[n];
    item values R[1], R[2], . . . , R[n].
    It is routine to argue that a set S ⊆ { 1, 2, . . . , n } is an optimal solution of instance I if
    and only if it is an optimal solution of instance I ′ of the knapsack problem.
    Each such instance I ′ of the knapsack problem can be solved in time O(nC), because the
    running time of the standard dynamic programming algorithm for the knapsack problem
    is O(nW ) where W is the capacity of the knapsack.
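A sketch of the standard O(nC) dynamic program that the reduction appeals to: best[c] is the maximum total star rating achievable within capacity c using the files considered so far.

```python
def best_total_rating(C, S, R):
    """0/1 knapsack by dynamic programming: O(n * C) time, O(C) space."""
    best = [0] * (C + 1)
    for size, rating in zip(S, R):
        # iterate capacities downward so each file is used at most once
        for c in range(C, size - 1, -1):
            best[c] = max(best[c], best[c - size] + rating)
    return best[C]

print(best_total_rating(2, [2, 1, 2], [3, 2, 0]))  # 3, the optimum from part (d)
```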
    (d) For capacity C = 2 and arrays S = [2, 1, 2] and R = [3, 2, 0], the star-rating-per-size
    ratios are 3/2, 2, and 0, and hence the algorithm will select only file 2, achieving total
    star rating 2.
    Instead, the optimal solution is to select only file 1, achieving total star rating 3.
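The part (d) counterexample can be checked with a sketch of the ratio greedy (files considered by non-increasing star rating per unit of size):

```python
def greedy_by_ratio(C, S, R):
    """Greedy by non-increasing R[i]/S[i]; return (1-based file numbers
    stored, total star rating achieved)."""
    chosen, used, total = [], 0, 0
    for i in sorted(range(len(S)), key=lambda i: R[i] / S[i], reverse=True):
        if used + S[i] <= C:
            chosen.append(i + 1)
            used += S[i]
            total += R[i]
    return chosen, total

print(greedy_by_ratio(2, [2, 1, 2], [3, 2, 0]))  # stores only file 2, rating 2
```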
