Work-Efficient Parallel Derandomization II: Optimal Concentrations via Bootstrapping
Open access
Date
2024-06
Type
Conference Paper
ETH Bibliography
yes
Abstract
In this paper, we present an efficient parallel derandomization method for randomized algorithms that rely on concentration bounds such as the Chernoff bound. This settles a classic problem in parallel derandomization, which dates back to the 1980s. Concretely, consider the set balancing problem: given m sets S_1, …, S_m, each of size at most s, over a ground set of size n, partition the ground set into two parts such that each set is split evenly, up to a small additive (discrepancy) bound. By the Chernoff bound, a random partition achieves a discrepancy of O(√(s log m)) in each set. We give a deterministic parallel algorithm that matches this bound, using near-linear work Õ(m + n + Σ_{i=1}^{m} |S_i|) and polylogarithmic depth poly(log(mn)). The previous results were weaker in their discrepancy and/or work bounds: Motwani, Naor, and Naor [FOCS'89] and Berger and Rompel [FOCS'89] achieve discrepancy s^ε · O(√(s log m)) with work Õ(m + n + Σ_{i=1}^{m} |S_i|) · m^{Θ(1/ε)} and polylogarithmic depth; the discrepancy was later optimized to O(√(s log m)), e.g., by Harris [Algorithmica'19], but the work bound remained prohibitively high at Õ(m^4 n^3). Notice that these algorithms would require a large polynomial number of processors just to match the near-linear runtime of the sequential algorithm. Ghaffari, Grunau, and Rozhon [FOCS'23] achieve discrepancy s/poly(log(nm)) + O(√(s log m)) with near-linear work and polylogarithmic depth. Notice that this discrepancy is nearly quadratically larger than the desired bound and barely sublinear in the trivial bound of s. Our method is different from prior work. It can be viewed as a novel bootstrapping mechanism that uses crude partitioning algorithms as a subroutine and sharpens their discrepancy to the optimal bound.
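To make the set balancing setup concrete, here is a minimal Python sketch (a toy illustration, not the paper's deterministic algorithm): it draws a uniformly random ±1 coloring of the ground set and measures the worst-case set imbalance against the √(s log m) scale from the Chernoff bound. The instance parameters and the helper name are made up for illustration.

```python
import math
import random

def random_partition_discrepancy(sets, n, seed=0):
    """Color each ground-set element +1/-1 uniformly at random and
    return the maximum absolute imbalance (discrepancy) over all sets."""
    rng = random.Random(seed)
    color = [rng.choice((-1, 1)) for _ in range(n)]
    return max(abs(sum(color[e] for e in S)) for S in sets)

# Hypothetical toy instance: m = 200 sets of size s = 64 over n = 1000 elements.
n, m, s = 1000, 200, 64
rng = random.Random(42)
sets = [rng.sample(range(n), s) for _ in range(m)]

disc = random_partition_discrepancy(sets, n)
chernoff_scale = math.sqrt(s * math.log(m))  # O(sqrt(s log m)), constant omitted
print(disc, chernoff_scale)
```

On such instances the observed discrepancy is typically a small constant multiple of the √(s log m) scale, far below the trivial bound of s; the paper's contribution is a deterministic parallel algorithm matching this guarantee with near-linear work.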
In particular, we solve the problem recursively: in each iteration, we use the crude partition to split the variables into many smaller parts, and we then fix a constraint on the variables in each part so that the overall number of variables in the problem is reduced. The scheme relies crucially on an interesting application of the multiplicative weights update method to control the variance losses in each iteration. Our result applies to the much more general lattice approximation problem, thus providing an efficient parallel derandomization of the randomized rounding scheme for linear programs.
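For readers unfamiliar with the multiplicative weights update (MWU) method mentioned above, the following generic sketch shows its basic mechanics in the standard experts setting. This is only the textbook update rule, not the paper's variance-control application; all names and parameters here are illustrative.

```python
import math

def mwu_weights(losses, eta):
    """Generic multiplicative weights update: given a sequence of loss
    vectors (one entry per expert, each loss in [0, 1]), maintain a
    weight distribution that concentrates on low-loss experts."""
    n = len(losses[0])
    w = [1.0] * n
    for loss in losses:
        # Multiplicatively penalize each expert by its incurred loss.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, loss)]
        total = sum(w)
        w = [wi / total for wi in w]  # renormalize to a distribution
    return w

# Two toy "experts": expert 0 always incurs loss 0, expert 1 always loss 1.
losses = [[0.0, 1.0]] * 20
w = mwu_weights(losses, eta=0.5)
```

After 20 rounds, nearly all weight sits on the consistently better expert. In the paper, a mechanism of this flavor reweights parts of the instance so that variance losses do not accumulate across the recursion's iterations.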
Permanent link
https://doi.org/10.3929/ethz-b-000680548
Publication status
published
Book title
STOC 2024: Proceedings of the 56th Annual ACM Symposium on Theory of Computing
Publisher
Association for Computing Machinery
Subject
Parallel Algorithms; Derandomization
Notes
Conference lecture held on June 28, 2024.