My solution for the Fall 2015 Randomized Algorithms course, Problem Set 2. Please keep your academic integrity.

Try Firefox if your web browser (e.g. Chrome, IE) does not support MathML, a part of HTML5.

Randomized Algorithms: Problem Set 2


1 Problem 1


Karger’s paper [1] pointed out to me that we should use Karger’s algorithm itself to find an α-min-cut, rather than calculate the probability that Karger’s algorithm returns exactly a given α-min-cut.

Our algorithm runs as follows: before and after each contraction, record a vertex of minimum degree (choosing one arbitrarily if several vertices attain the minimum) together with the edges incident to it; finally, return the smallest recorded edge set over all the n−1 records.
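A minimal Python sketch of this record-keeping variant (the multigraph representation and all helper names are my own, not part of the problem set; the cut size returned is the smallest super-vertex degree ever observed):

```python
import random

def contract(adj, u, v):
    """Merge vertex v into u in the multigraph adj, dropping self-loops."""
    for w, m in adj.pop(v).items():
        del adj[w][v]
        if w != u:
            adj[u][w] = adj[u].get(w, 0) + m
            adj[w][u] = adj[w].get(u, 0) + m

def random_edge(adj):
    """Pick a uniformly random multi-edge (u, v)."""
    verts = list(adj)
    u = random.choices(verts, weights=[sum(adj[x].values()) for x in verts])[0]
    nbrs = list(adj[u])
    v = random.choices(nbrs, weights=[adj[u][w] for w in nbrs])[0]
    return u, v

def karger_min_degree_records(adj):
    """Karger's contraction, recording the minimum super-vertex degree
    before and after every contraction; return the smallest record."""
    adj = {x: dict(nbrs) for x, nbrs in adj.items()}
    best = min(sum(nbrs.values()) for nbrs in adj.values())
    while len(adj) > 2:
        u, v = random_edge(adj)
        contract(adj, u, v)
        best = min(best, min(sum(nbrs.values()) for nbrs in adj.values()))
    return best

# Two triangles joined by one bridge edge: the min cut has size 1.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
graph = {v: {} for v in range(6)}
for a, b in edges:
    graph[a][b] = graph[a].get(b, 0) + 1
    graph[b][a] = graph[b].get(a, 0) + 1

random.seed(0)
result = min(karger_min_degree_records(graph) for _ in range(20))
```

Each recorded degree is the size of a cut in the original graph (the edges leaving the corresponding super-vertex), so the returned value is always an upper bound on some cut and, by the analysis below, is an α-min-cut with good probability.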

Let c be the size of a min-cut. Without loss of generality, we may assume the minimum degree of every intermediate graph during the contraction procedure is larger than αc (otherwise some record is itself a cut of size at most αc, i.e. an α-min-cut is returned trivially). Since Pr[Karger’s algorithm returns a min-cut] ≤ Pr[Karger’s algorithm returns an α-min-cut], we lower bound the former. Every vertex of the n-vertex graph has degree at least αc, so the graph has at least αcn/2 edges, and the probability that a min-cut edge is contracted in a step with i vertices remaining is at most c/(αci/2) = 2/(αi). Therefore

$$\Pr[\text{Karger's algorithm returns a min-cut}] \ge \prod_{i=3}^{n}\left(1-\frac{2}{\alpha i}\right).$$


For any α-min-cut, the probability that the i-th contraction contracts an edge of the cut is at most 2α/(n−i+1): the cut has at most αc edges, while the current (n−i+1)-vertex graph has at least c(n−i+1)/2 edges. The probability that all the edges of the cut survive the first m contractions is at least

$$\prod_{i=1}^{m}\left(1-\frac{2\alpha}{n-i+1}\right)=\exp\left(\sum_{i=1}^{m}\ln\left(1-\frac{2\alpha}{n-i+1}\right)\right).$$

Notice that the ln factor will be undefined when n−i+1 ≤ 2α. This implies we should stop contracting when 2α vertices remain, i.e. after m = n−2α contractions. The probability that the edges of the cut survive until that moment is at least

$$\prod_{i=1}^{n-2\alpha}\left(1-\frac{2\alpha}{n-i+1}\right)=\prod_{j=2\alpha+1}^{n}\frac{j-2\alpha}{j}=\binom{n}{2\alpha}^{-1}.$$
We then partition the remaining 2α vertices into two parts uniformly at random. Since there are $2^{2\alpha}/2 = 2^{2\alpha-1}$ such partitions, the probability that the desired cut is produced is $2^{1-2\alpha}$. In conclusion, any α-min-cut is returned with probability at least $2^{1-2\alpha}\binom{n}{2\alpha}^{-1}$, so there are at most $O\!\left(2^{2\alpha-1}\binom{n}{2\alpha}\right)$ α-min-cuts.

2 Problem 2


Let $X_1, X_2, \ldots, X_n$ be the coin-flipping results. For any three of them $X_i, X_j, X_k$, let $Y_a = X_i \oplus X_j$, $Y_b = X_j \oplus X_k$, $Y_c = X_i \oplus X_k$. Then

$$Y_c = Y_a \oplus Y_b, \qquad \text{so} \qquad \Pr[Y_c = 1 \mid Y_a = Y_b = 1] = 0 \ne \frac{1}{2} = \Pr[Y_c = 1].$$

Therefore the $Y_i$ are not mutually independent.
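This can be checked by exhaustive enumeration (a quick illustration of my own, not part of the assignment):

```python
from itertools import product

# Enumerate three fair coins X_i, X_j, X_k and the derived variables
# Y_a = X_i ^ X_j, Y_b = X_j ^ X_k, Y_c = X_i ^ X_k.
outcomes = [(xi ^ xj, xj ^ xk, xi ^ xk)
            for xi, xj, xk in product([0, 1], repeat=3)]

# Y_c is fully determined by the other two: Y_c = Y_a ^ Y_b.
assert all(yc == ya ^ yb for ya, yb, yc in outcomes)

# Hence not mutually independent: Pr[Y_a = Y_b = Y_c = 1] = 0 != (1/2)^3.
p_all_one = sum(1 for o in outcomes if o == (1, 1, 1)) / len(outcomes)
print(p_all_one)  # 0.0
```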


Let $Y_i = X_a \oplus X_b$ and $Y_j = X_c \oplus X_d$. If $\{a,b\} \cap \{c,d\} = \varnothing$, then $Y_i$ is trivially independent of $Y_j$. Assuming $a = d$ w.l.o.g.,

$$\Pr[Y_i = 1 \wedge Y_j = 1] = \Pr[X_a \ne X_b \wedge X_c \ne X_a] = \frac{1}{4} = \Pr[Y_i = 1]\times\Pr[Y_j = 1].$$

Similarly we can prove $\forall p, q \in \{0,1\}$, $\Pr[Y_i = p \wedge Y_j = q] = \Pr[Y_i = p] \times \Pr[Y_j = q]$; therefore the $Y_i$ are pairwise independent. Note that

$$\mathrm{Cov}(Y_i, Y_j) = \mathbb{E}[Y_i Y_j] - \mathbb{E}[Y_i]\,\mathbb{E}[Y_j].$$

Since $Y_i$ and $Y_j$ are independent, $\Pr[Y_i = Y_j = 1] = \Pr[Y_i = 1] \times \Pr[Y_j = 1]$, i.e. $\mathbb{E}[Y_iY_j] = \mathbb{E}[Y_i]\,\mathbb{E}[Y_j]$, so every covariance term vanishes.


$$\mathrm{Var}[Y] = \sum_{i=1}^{m}\mathrm{Var}[Y_i] + \sum_{i\neq j}\mathrm{Cov}(Y_i, Y_j) = \sum_{i=1}^{m}\mathrm{Var}[Y_i] = \frac{m}{4}.$$


Due to Chebyshev’s inequality,

$$\Pr[|X-\mathbb{E}[X]|\ge t]\le\frac{\mathrm{Var}[X]}{t^2}.$$

Let $t = n$ and $X = Y$:

$$\Pr[|Y-\mathbb{E}[Y]|\ge n]\le\frac{\mathrm{Var}[Y]}{n^2}.$$
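The variance identity above can be verified exactly on a small case (my own check; the choice n = 4 and the variable names are mine):

```python
from itertools import combinations, product
from fractions import Fraction

# Exact check of Var[Y] = sum_i Var[Y_i] = m/4 for n = 4 fair coins and
# the m = C(4,2) = 6 pairwise XOR variables Y_{ab} = X_a ^ X_b.
n = 4
pairs = list(combinations(range(n), 2))
m = len(pairs)

ys = [sum(bits[a] ^ bits[b] for a, b in pairs)
      for bits in product([0, 1], repeat=n)]

ey = Fraction(sum(ys), len(ys))
ey2 = Fraction(sum(y * y for y in ys), len(ys))
var = ey2 - ey ** 2

assert ey == Fraction(m, 2)    # E[Y] = m/2
assert var == Fraction(m, 4)   # pairwise independence kills all covariances
print(ey, var)                 # 3 3/2
```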

3 Problem 3


Let random variable $X_i$ denote the bin the $i$-th ball is hashed into, let indicator random variable $Y_{i,j}$ denote whether the $i$-th and $j$-th balls are hashed into the same bin, and let $Y=\sum_{i<j}Y_{i,j}$ be the number of collisions. Then, by linearity of expectation, $\mathbb{E}[Y]=\sum_{i<j}\mathbb{E}[Y_{i,j}]$. Since the hash function is pairwise independent, $\mathbb{E}[Y_{i,j}]=1/n$, so $\mathbb{E}[Y]=\binom{n}{2}/n<n/2$.

$$\Pr\left[\text{the maximum load is larger than } C\sqrt{n}\right] \le \Pr\left[Y\ge\binom{C\sqrt{n}}{2}\right] \le \frac{\mathbb{E}[Y]}{\binom{C\sqrt{n}}{2}} \le \frac{2}{C^2}, \quad\text{(Markov's inequality)}$$

where the last step uses $\mathbb{E}[Y] < n/2$ and $\binom{C\sqrt{n}}{2} \ge C^2 n/4$ (valid once $C\sqrt{n}\ge 2$).

Taking $C = 50$, we have $\Pr[\text{the maximum load is larger than } 50\sqrt{n}] \le 0.01$, i.e. $\Pr[\text{the maximum load is at most } O(\sqrt{n})] \ge 0.99$.
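The bound is very loose; a simulation with truly uniform hashing (which is in particular pairwise independent — the constants and variable names below are my own) shows both the collision count near its expectation and a maximum load far below the threshold:

```python
import random
from math import comb

random.seed(1)
n = 10_000
max_loads, collision_counts = [], []
for _ in range(20):
    bins = [0] * n
    for _ in range(n):                   # throw n balls uniformly into n bins
        bins[random.randrange(n)] += 1
    max_loads.append(max(bins))
    collision_counts.append(sum(comb(load, 2) for load in bins))

# E[collisions] = C(n,2)/n < n/2 = 5000; the 50*sqrt(n) threshold is 5000.
print(max(max_loads), max(collision_counts))
```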

To generalize this argument, we define indicator random variables $Y_I$, $I\in\binom{[n]}{k}$, denoting whether the $k$ balls indexed by $I$ all collide (assuming now that the hash function is $k$-wise independent), and let $Y=\sum_{I\in\binom{[n]}{k}}Y_I$ be the number of $k$-wise collisions. Then, by linearity of expectation, $\mathbb{E}[Y]=\sum_{I\in\binom{[n]}{k}}\mathbb{E}[Y_I]=\binom{n}{k}\big/n^{k-1}$.

$$\Pr[\text{the maximum load is larger than } m] \le \Pr\left[Y\ge\binom{m}{k}\right] \le \frac{\mathbb{E}[Y]}{\binom{m}{k}} \le \frac{(ne/k)^k}{n^{k-1}(m/k)^k} = \frac{e^k n}{m^k}, \quad\text{(Markov's inequality)}$$

where we used $\binom{n}{k}\le(ne/k)^k$ (Stirling's approximation) and $\binom{m}{k}\ge(m/k)^k$.

Let $m = e\sqrt[k]{100n}$; then $\Pr[\text{the maximum load is larger than } e\sqrt[k]{100n}] \le 0.01$, i.e. $\Pr[\text{the maximum load is at most } O(\sqrt[k]{n})] \ge 0.99$.


The $O(\sqrt{n})$ bound cannot be improved if the only thing we assume is 2-universality and we consider the number of collisions only, since then the only quantity we can bound is the expected number of collisions.

Discarding the collision count, the only other information we have is the marginal distribution of the load of an arbitrary single bin, which is what Chebyshev’s inequality bounds. But we still cannot characterize the correlations of the loads between different bins. By the way, we can prove the same bound by the union bound.

Therefore I believe the bound cannot be improved if the only thing we assume about the hash function is 2-universality.

4 Problem 4


We uniformly and independently assign each vertex to $S$ or $\bar S$ at random. For any edge $(i,j)$, the probability that $i$ is assigned to $S$ and $j$ to $\bar S$, i.e. the probability that $w_{ij}$ is counted in the solution, is $1/4$. Then, by linearity of expectation,

$$\mathbb{E}\left[\sum_{(i,j)\in E:\, i\in S,\, j\in \bar S} w_{ij}\right] = \sum_{(i,j)\in E}\frac{w_{ij}}{4} \ge \frac{OPT}{4}.$$

Therefore our random sampling algorithm is a $\frac{1}{4}$-approximation.
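Linearity makes the expectation exact, which is easy to confirm by averaging over all assignments on a tiny instance (the graph below is my own example):

```python
from itertools import product
from fractions import Fraction

# A tiny weighted digraph: directed edges (i, j, w_ij).
edges = [(0, 1, 3), (1, 2, 5), (2, 0, 2), (0, 2, 7)]
n = 3
total_w = sum(w for _, _, w in edges)      # 17

# Average the directed-cut weight over all 2^n assignments to S.
cut_sum = 0
for assign in product([0, 1], repeat=n):   # assign[v] == 1  <=>  v in S
    cut_sum += sum(w for i, j, w in edges if assign[i] == 1 and assign[j] == 0)

avg = Fraction(cut_sum, 2 ** n)
assert avg == Fraction(total_w, 4)         # E[cut weight] = (sum of w_ij) / 4
print(avg)                                 # 17/4
```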


Consider $x_i$ as a 0-1 indicator of whether vertex $i$ is assigned to $S$. For every $(i,j)\in E$, the constraints $z_{ij}\le x_i$, $z_{ij}\le 1-x_j$, and $0\le z_{ij}\le 1$ force $z_{ij}\in\{0,1\}$ at the optimum, with $z_{ij}=1$ if and only if $x_i=1 \wedge x_j=0$. Therefore $z_{ij}$ is a 0-1 indicator of whether edge $(i,j)$ goes from $S$ to $\bar S$, and maximizing the objective function achieves the optimal value.


To construct an LP relaxation of the above integer program, the constraint $x_i\in\{0,1\},\forall i\in V$ is relaxed to $0\le x_i\le 1,\forall i\in V$. Let $x_i^*$ and $z_{ij}^*$ be an optimal fractional solution of the linear program. Observe that

  • xi*[0,1],iV;

  • zij*=min{xi*,1xj*}.

Notice that the function $z(x,y)=\min\{x,1-y\}$ is symmetric about the line $x+y=1$. Rounding each vertex into $S$ independently with probability $f(x_i^*)=ax_i^*+b$, the probability that edge $(i,j)$ goes from $S$ to $\bar S$ is $g(x_i^*,x_j^*)$, where $g(x,y)=f(x)(1-f(y))=(ax+b)(1-ay-b)$; we try to make $g$ have this symmetry too. This implies $g(x,y)=g(1-y,1-x)$, i.e.

$$(ax+b)(1-ay-b)=(a(1-y)+b)(1-a(1-x)-b).$$

Then we have $a=1-2b$, and

$$g(x,y)=\big((1-2b)x+b\big)\big((1-b)-(1-2b)y\big).$$
Due to the symmetry, we focus on the area x(0,1),y(0,1x) only. Note that z(x,y)=x in this area. Our goal, i.e. the approximation ratio function, becomes

$$\max_{0\le b\le 1/2}\;\min_{0<x<1,\;0<y<1-x}\frac{\big((1-2b)x+b\big)\big((1-b)-(1-2b)y\big)}{x}.$$

By brute-force enumeration over the value of $b$, we found that when $b=1/4$ the approximation ratio is exactly $1/2$. Therefore we take $a=1/2$, $b=1/4$ and obtain a good approximation ratio of $1/2$.
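The claimed ratio can be sanity-checked numerically over the whole feasible square, not just the region used above (the grid resolution is my own choice):

```python
# Grid-check that with f(x) = x/2 + 1/4 (i.e. a = 1/2, b = 1/4) the
# ratio g(x, y) / z(x, y) never drops below 1/2.
def f(x):
    return 0.5 * x + 0.25

steps = 200
ratios = []
for i in range(1, steps):
    for j in range(1, steps):
        x, y = i / steps, j / steps
        z = min(x, 1 - y)              # z_ij* = min{x_i*, 1 - x_j*}
        g = f(x) * (1 - f(y))          # Pr[edge (i, j) goes from S to S-bar]
        ratios.append(g / z)

print(round(min(ratios), 3))           # 0.5
```

The minimum is attained at $x=y=1/2$, matching the analytic value of the ratio.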



Let $x_i$ be an indicator of whether $u_i$ is chosen; the constraints $\sum_{i:u_i\in S_j}x_i\ge 1,\;\forall 1\le j\le m$ guarantee that every subset $S_j$ is hit by the chosen elements. Finally, the objective function $\sum_i w_i x_i$ is minimized to achieve the optimal value.


Relax the constraint $x_i\in\{0,1\},1\le i\le n$ to $x_i\in[0,1],1\le i\le n$. Let $x_i^*$ be an optimal fractional solution of the linear program. Independently choose each element at random with probability $x_i^*$. For a subset $S_j$, assume w.l.o.g. that the fractional values of its elements are $y_1,y_2,\ldots,y_k$. Then $1-\prod_{i=1}^{k}(1-y_i)$ is the probability that $S_j$ is hit. By the linear programming constraints,

$$\sum_{i=1}^{k}y_i\ge 1.$$

The value of $1-\prod_{i=1}^{k}(1-y_i)$ is then minimized when all $y_i=1/k$. Thus the probability that $S_j$ is hit is at least

$$1-\left(1-\frac{1}{k}\right)^{k}.$$
Note that the R.H.S. is monotonically decreasing for $k\ge 1$, and $\lim_{k\to+\infty}1-(1-1/k)^k=(e-1)/e>1/2$. Therefore every subset is hit with probability at least $1/2$ in each round. This implies we may repeat this random choosing procedure $O(\log m)$ times.

$$\Pr\left[\exists\, S_i \text{ not hit after } \log_2 2m \text{ rounds of random choosing}\right] < m\left(1-\frac{1}{2}\right)^{\log_2 2m} = \frac{1}{2}. \quad\text{(Union bound)}$$

Therefore after $\log_2 2m$ rounds of the random choosing procedure, the probability that all subsets are hit is at least $1/2$. Besides, the expected increment of weight in each round is $\sum_{i}w_i x_i^*\le OPT$. In conclusion, our algorithm succeeds with probability at least $1/2$ and returns weight $O(\log m)\cdot OPT$ in expectation.
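The repeated rounding procedure can be sketched on a toy instance (everything below — the sets, weights, and the hand-made feasible fractional solution, which is not computed by an actual LP solver — is my own illustration):

```python
import random
from math import ceil, log2

random.seed(0)

# Toy weighted hitting-set instance: elements 0..4 with weights w,
# subsets to hit, and a feasible fractional solution x*.
sets_to_hit = [{0, 1}, {1, 2, 3}, {3, 4}, {0, 4}]
w = [3, 1, 2, 1, 2]
x_star = [0.5, 0.5, 0.0, 0.5, 0.5]       # each set has fractional mass >= 1

m = len(sets_to_hit)
rounds = ceil(log2(2 * m))               # log2(2m) = 3 rounds here

def round_once():
    """One run of the repeated randomized-rounding procedure."""
    chosen = set()
    for _ in range(rounds):
        for i, xi in enumerate(x_star):
            if random.random() < xi:     # keep element i w.p. x_i*
                chosen.add(i)
    covered = all(s & chosen for s in sets_to_hit)
    return covered, sum(w[i] for i in chosen)

trials = 200
hits = sum(1 for _ in range(trials) if round_once()[0])
print(hits / trials)                     # at least 1/2 by the analysis, typically higher
```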


  • [1] Karger, David R. “Global Min-Cuts in RNC, and Other Ramifications of a Simple Min-Cut Algorithm.” SODA ’93, 1993.

CC BY-SA 4.0 RA.PS2 by himemeizhi is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.