Scheduling Loop-free Network Updates: It’s Good to Relax!

Arne Ludwig, Jasiek Marcinkowski, Stefan Schmid

Abstract: We consider the problem of updating arbitrary routes in a software-defined network in a (transiently) loop-free manner. We are interested in fast network updates, i.e., in schedules which minimize the number of interactions (i.e., rounds) between the controller and the network nodes. We first prove that this problem is difficult in general: The problem of deciding whether a k-round schedule exists is NP-complete already for k = 3, and there are problem instances requiring Ω(n) rounds, where n is the network size. Given these negative results, we introduce an attractive, relaxed notion of loop-freedom. We prove that O(log n)-round relaxed loop-free schedules always exist, and can also be computed efficiently.
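
A network update here replaces an old route by a new one, and during an update round some switches already forward along the new route while others still use the old one; loop-freedom requires that every such mixed state contains no forwarding loop. As a minimal illustration of that condition (the next-hop maps, node names, and the helper is_loop_free are illustrative assumptions, not code from the paper), the following Python sketch checks whether a given set of already-updated nodes yields a loop-free mixed forwarding state:

```python
def is_loop_free(old_next_hop, new_next_hop, updated, destination):
    """Check that the mixed forwarding state (new rules at nodes in 'updated',
    old rules everywhere else) never revisits a node on the way to
    'destination', i.e. contains no forwarding loop."""
    def next_hop(v):
        return new_next_hop.get(v) if v in updated else old_next_hop.get(v)

    for start in set(old_next_hop) | set(new_next_hop):
        seen = set()
        v = start
        while v is not None and v != destination:
            if v in seen:            # node revisited: forwarding loop detected
                return False
            seen.add(v)
            v = next_hop(v)
    return True

# Illustrative example: old route s -> a -> d, new route s -> b -> d.
old = {"s": "a", "a": "d", "b": "d"}
new = {"s": "b", "a": "d", "b": "d"}
print(is_loop_free(old, new, updated={"s"}, destination="d"))  # True: s may be updated first
```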

Guest: Arne Ludwig
Host: Yvonne-Anne Pignolet

Fault Tolerant BFS Structures: A Reinforcement-Backup Tradeoff

Merav Parter, David Peleg

Abstract: This paper initiates the study of fault-resilient network structures that mix two orthogonal protection mechanisms: (a) backup, namely, augmenting the structure with many (redundant) low-cost but fault-prone components, and (b) reinforcement, namely, acquiring high-cost but fault-resistant components. To study the trade-off between these two mechanisms in a concrete setting, we address the problem of designing a (b,r) fault-tolerant BFS (or (b,r) FT-BFS for short) structure, namely, a subgraph H of the network G consisting of two types of edges: a set E′ ⊆ E of r(n) fault-resistant reinforcement edges, which are assumed to never fail, and a (larger) set E(H) ∖ E′ of b(n) fault-prone backup edges, such that subsequent to the failure of a single fault-prone backup edge e ∈ E ∖ E′, the surviving part of H still contains a BFS spanning tree for (the surviving part of) G, satisfying dist(s,v,H ∖ {e}) ≤ dist(s,v,G ∖ {e}) for every v ∈ V and e ∈ E ∖ E′. We establish the following trade-off between b(n) and r(n): for every real ϵ ∈ (0,1], if r(n) = Θ~(n^(1−ϵ)), then b(n) = Θ~(n^(1+ϵ)) is necessary and sufficient.
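
The defining condition of a (b,r) FT-BFS structure can be checked directly from the distances it preserves: after any single fault-prone edge fails, every vertex must remain at least as close to the source in the surviving structure H as it is in the surviving network G. Below is a minimal Python sketch of this verification; the adjacency-list representation and function names are illustrative assumptions, not taken from the paper.

```python
from collections import deque

def bfs_dist(adj, source, banned_edge=None):
    """BFS distances from 'source', optionally ignoring one undirected edge."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if banned_edge and {u, v} == set(banned_edge):
                continue
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def is_br_ft_bfs(adj_G, adj_H, source, reinforced):
    """Check dist(s,v,H-e) <= dist(s,v,G-e) for every vertex v and every
    fault-prone (i.e. non-reinforced) edge e of G."""
    reinforced = set(map(frozenset, reinforced))
    fault_prone = {frozenset((u, v)) for u in adj_G for v in adj_G[u]} - reinforced
    for e in fault_prone:
        u, v = tuple(e)
        d_G = bfs_dist(adj_G, source, banned_edge=(u, v))
        d_H = bfs_dist(adj_H, source, banned_edge=(u, v))
        if any(d_H.get(w, float("inf")) > d for w, d in d_G.items()):
            return False
    return True
```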

Guest: Merav Parter
Host: Yvonne-Anne Pignolet

Facility Location in Evolving Metrics

David Eisenstat, Claire Mathieu, Nicolas Schabanel

Abstract: Understanding the dynamics of evolving social or infrastructure networks is a challenge in applied areas such as epidemiology, viral marketing, or urban planning. During the past decade, data has been collected on such networks but has yet to be fully analyzed. We propose to use information on the dynamics of the data to find stable partitions of the network into groups. For that purpose, we introduce a time-dependent, dynamic version of the facility location problem that includes a switching cost when a client’s assignment changes from one facility to another. This might provide a better representation of an evolving network, emphasizing the abrupt change of relationships between subjects rather than the continuous evolution of the underlying network. We show that in realistic examples this model indeed yields better-fitting solutions than optimizing every snapshot independently. We present an O(log nT)-approximation algorithm and a matching hardness result, where n is the number of clients and T the number of time steps. We also give another algorithm with approximation ratio O(log nT) for the variant where one pays at each time step (leasing) for each open facility.
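
To make the objective concrete, a candidate solution for the dynamic model can be priced out time step by time step: connection costs between clients and their assigned facilities, a switching cost whenever a client's facility changes between consecutive steps, and, in the leasing variant, a per-time-step fee for every open facility. The sketch below is an illustrative cost evaluator under these assumptions; the parameter names and exact cost structure are not the paper's formal definition.

```python
def dynamic_fl_cost(distances, assignment, facility_fee, switch_cost):
    """Price a dynamic facility-location solution over T time steps.

    distances[t][c][f] -- distance from client c to facility f at time t
    assignment[t][c]   -- facility serving client c at time t
    facility_fee       -- per-time-step fee of each open facility (leasing variant)
    switch_cost        -- cost charged whenever a client changes facility
    """
    total = 0.0
    for t, assign_t in enumerate(assignment):
        total += facility_fee * len(set(assign_t.values()))    # leasing fees
        for client, fac in assign_t.items():
            total += distances[t][client][fac]                 # connection cost
            if t > 0 and assignment[t - 1].get(client) != fac:
                total += switch_cost                           # switching cost
    return total

# Two clients, two time steps, two candidate facilities.
dists = [{"c1": {"f1": 1, "f2": 3}, "c2": {"f1": 3, "f2": 1}},
         {"c1": {"f1": 3, "f2": 1}, "c2": {"f1": 3, "f2": 1}}]
plan = [{"c1": "f1", "c2": "f2"}, {"c1": "f2", "c2": "f2"}]
print(dynamic_fl_cost(dists, plan, facility_fee=2, switch_cost=5))  # 15.0
```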

Guest: Nicolas Schabanel

Host: Stefan Schmid

On Dynamic Bin Packing for Resource Allocation in the Cloud

Yusen Li, Xueyan Tang and Wentong Cai

Abstract: Dynamic Bin Packing (DBP) is a variant of classical bin packing which assumes that items may arrive and depart at arbitrary times. Existing work on DBP generally aims to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, namely the MinTotal DBP problem, which aims at minimizing the total cost of the bins used over time. It is motivated by the request dispatching problem arising in cloud gaming systems. We analyze the competitive ratios of the commonly used First Fit, Best Fit, and Any Fit packing algorithms (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than the max/min item interval length ratio μ. The competitive ratio of Best Fit packing is not bounded for any given μ. For the general case, First Fit packing has a competitive ratio of 2μ + 13. We also propose a Modified First Fit packing algorithm that can achieve a competitive ratio of 8/7·μ + 55/7 when μ is not known and a competitive ratio of μ + 8 when μ is known.

Guest: Yusen Li, Nanyang Technological University, Singapore
Host: Yvonne-Anne Pignolet
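
As a toy illustration of the MinTotal objective, the sketch below simulates First Fit packing on items with arrival times, departure times, and sizes, charging each bin for the length of time it stays non-empty. The event handling and the rule that a bin closes once it empties are simplifying assumptions made here for illustration; they are not the paper's exact model or analysis.

```python
def first_fit_min_total(items, capacity=1.0):
    """Simulate First Fit dynamic bin packing.

    items: list of (arrival, departure, size) tuples.
    Returns the total cost, i.e. the summed time during which each bin is non-empty.
    """
    # Time-ordered event list; departures (kind 0) sort before arrivals (kind 1)
    # at the same instant, so freed space can be reused immediately.
    events = []
    for idx, (arr, dep, size) in enumerate(items):
        events.append((arr, 1, idx))
        events.append((dep, 0, idx))
    events.sort()

    bins = []          # bins[b] = {"load", "items", "opened"}
    location = {}      # item index -> bin index
    total_cost = 0.0

    for time, kind, idx in events:
        size = items[idx][2]
        if kind == 0:                                   # departure
            if idx not in location:
                continue
            b = location.pop(idx)
            bins[b]["load"] -= size
            bins[b]["items"].discard(idx)
            if not bins[b]["items"]:                    # bin becomes empty: close it
                total_cost += time - bins[b]["opened"]
                bins[b]["opened"] = None
        else:                                           # arrival: First Fit rule
            for b, state in enumerate(bins):
                if state["opened"] is not None and state["load"] + size <= capacity:
                    break
            else:                                       # no open bin fits: open a new one
                bins.append({"load": 0.0, "items": set(), "opened": None})
                b = len(bins) - 1
            if bins[b]["opened"] is None:
                bins[b]["opened"] = time
            bins[b]["load"] += size
            bins[b]["items"].add(idx)
            location[idx] = b
    return total_cost

# Example: three overlapping requests; two bins are used, each open for 4 time units.
print(first_fit_min_total([(0, 4, 0.5), (1, 3, 0.4), (2, 6, 0.7)]))  # 8.0
```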

The power of two choices in distributed voting

Colin Cooper, Robert Elsässer, and Tomasz Radzik

Abstract: Distributed voting is a fundamental topic in distributed computing. In pull voting, in each step every vertex chooses a neighbour uniformly at random, and adopts its opinion. The voting is completed when all vertices hold the same opinion. On many graph classes including regular graphs, pull voting requires Θ(n) expected steps to complete, even if initially there are only two distinct opinions.
In this paper we consider a related process which we call two-sample voting: every vertex chooses two random neighbours in each step. If the opinions of these neighbours coincide, then the vertex revises its opinion according to the chosen sample. Otherwise, it keeps its own opinion. We consider the performance of this process in the case where two different opinions reside on the vertices of some (arbitrary) sets A and B, respectively. Here, |A| + |B| = n is the number of vertices of the graph.
We show that if the initial imbalance between the two opinions is above a certain threshold (involving a fixed constant K), then with high probability two-sample voting completes on a random d-regular graph in O(log n) steps and the initial majority opinion wins. We also show the same performance for any regular graph, provided the second-largest eigenvalue of the transition matrix satisfies suitable bounds. In the graphs we consider, standard pull voting requires Ω(n) steps, and the minority can still win with probability |B|/n.
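
The two-sample voting process itself is easy to simulate: in every synchronous round, each vertex draws two random neighbours and adopts their opinion only if the two samples agree. The sketch below is an illustrative simulation; the sampling with replacement, the small cycle example, and the parameter names are assumptions, not the paper's experimental setup.

```python
import random

def two_sample_voting(adj, opinions, max_rounds=10_000):
    """Synchronous two-sample voting.

    adj: dict mapping each vertex to a list of neighbours.
    opinions: dict mapping each vertex to its initial opinion.
    Returns (winning_opinion, number_of_rounds) or (None, max_rounds).
    """
    opinions = dict(opinions)
    for rounds in range(1, max_rounds + 1):
        new_opinions = {}
        for v, neighbours in adj.items():
            a, b = random.choice(neighbours), random.choice(neighbours)
            if opinions[a] == opinions[b]:     # the two samples agree: adopt
                new_opinions[v] = opinions[a]
            else:                              # otherwise keep the own opinion
                new_opinions[v] = opinions[v]
        opinions = new_opinions
        if len(set(opinions.values())) == 1:   # consensus reached
            return next(iter(opinions.values())), rounds
    return None, max_rounds

# Example on a small cycle with a 4-vs-2 majority for opinion "A".
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
start = {0: "A", 1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}
print(two_sample_voting(cycle, start))
```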

Guest: Robert Elsässer, Universität Salzburg, http://uni-salzburg.at/index.php?id=53909
Host: Yvonne-Anne Pignolet

Scheduling Shared Continuous Resources on Many-Cores

André Brinkmann, Peter Kling, Friedhelm Meyer auf der Heide, Lars Nagel, Sören Riechers and Tim Süß

Abstract: We consider the problem of scheduling a number of jobs on m identical processors sharing a continuously divisible resource. Each job j comes with a resource requirement r_j ∈ [0,1]. The job can be processed at full speed if granted its full resource requirement. If receiving only an x-portion of r_j, it is processed at an x-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their processing speeds, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature.

In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of shared continuous resources to the processors; the job assignment to processors and the ordering of the jobs have already been fixed. It is shown that, even for unit-size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit-size jobs include an efficient optimal algorithm for two processors. Moreover, we prove that balanced schedules yield a (2 - 1/m)-approximation for a fixed number of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.
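
To make the model concrete, the sketch below simulates one plausible "balanced" policy for unit-size jobs: at every moment the single shared resource is split as evenly as possible among the jobs currently at the head of the m processor queues, capping each job at its requirement r_j and redistributing any leftover (a water-filling rule). Both this rule and the simulation details are illustrative assumptions; this is not the paper's GreedyBalance algorithm.

```python
def split_resource(requirements, total=1.0):
    """Water-filling: give each active job an equal share of the resource,
    capped at its requirement r_j, and redistribute whatever is left over."""
    shares = {j: 0.0 for j in requirements}
    active = set(requirements)
    while active and total > 1e-12:
        equal = total / len(active)
        saturated = set()
        for j in active:
            give = min(equal, requirements[j] - shares[j])
            shares[j] += give
            total -= give
            if shares[j] >= requirements[j] - 1e-12:
                saturated.add(j)
        if not saturated:            # nobody hit its cap: resource fully distributed
            break
        active -= saturated
    return shares

def makespan(queues, requirements, work=1.0):
    """Simulate the processors; queues[p] is the fixed job order of processor p.
    A job granted an x-portion of r_j runs at an x-fraction of full speed."""
    queues = [list(q) for q in queues]
    remaining = {j: work for q in queues for j in q}   # unit-size jobs
    t = 0.0
    while any(queues):
        heads = [q[0] for q in queues if q]
        shares = split_resource({j: requirements[j] for j in heads})
        speeds = {j: shares[j] / requirements[j] if requirements[j] > 0 else 1.0
                  for j in heads}
        dt = min(remaining[j] / speeds[j] for j in heads)   # time to next completion
        t += dt
        for q in queues:
            if q:
                j = q[0]
                remaining[j] -= speeds[j] * dt
                if remaining[j] <= 1e-9:
                    q.pop(0)
    return t

# Two processors, each with two unit-size jobs and fixed resource requirements.
reqs = {"a": 0.8, "b": 0.4, "c": 0.6, "d": 0.2}
print(makespan([["a", "b"], ["c", "d"]], reqs))
```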

Guest: Sören Riechers, Universität Paderborn, https://www.hni.uni-paderborn.de/alg/mitarbeiter/soerenri/

Host: Yvonne-Anne Pignolet

Parallel Peeling Algorithms

Justin Thaler, Michael Mitzenmacher, Jiayang Jiang

Abstract: The analysis of several algorithms and data structures can be framed as a peeling process on a random hypergraph: vertices with degree less than k are removed until there are no vertices of degree less than k left. The remaining hypergraph is known as the k-core. In this paper, we analyze parallel peeling processes, where in each round, all vertices of degree less than k are removed. It is known that, below a specific edge density threshold, the k-core is empty with high probability. We show that, with high probability, below this threshold, only log log n / log((k-1)(r-1)) + O(1) rounds of peeling are needed to obtain the empty k-core for r-uniform hypergraphs. Interestingly, we show that above this threshold, Ω(log n) rounds of peeling are required to find the non-empty k-core. Since most algorithms and data structures aim to peel to an empty k-core, this asymmetry appears fortunate. We verify the theoretical results both with simulation and with a parallel implementation using graphics processing units (GPUs). Our implementation provides insights into how to structure parallel peeling algorithms for efficiency in practice.
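
The parallel peeling process is straightforward to simulate: in each round, every vertex of degree less than k is removed together with its incident hyperedges. The sketch below is an illustrative round-by-round simulation; the random 3-uniform hypergraph and the chosen parameters are assumptions for demonstration, not the paper's experimental setup.

```python
import random

def parallel_peel(edges, num_vertices, k):
    """Round-synchronous peeling of an r-uniform hypergraph.

    edges: list of vertex tuples (the hyperedges).
    Returns (rounds_used, surviving_edges); the surviving edges span the k-core.
    """
    alive_edges = list(edges)
    rounds = 0
    while True:
        degree = [0] * num_vertices
        for e in alive_edges:
            for v in e:
                degree[v] += 1
        # every vertex of positive degree below k is peeled in this round
        peeled = {v for v in range(num_vertices) if 0 < degree[v] < k}
        if not peeled:
            return rounds, alive_edges
        alive_edges = [e for e in alive_edges if not peeled & set(e)]
        rounds += 1

# Example: a sparse random 3-uniform hypergraph, peeled towards its 2-core.
random.seed(0)
n, m, r = 1000, 700, 3
edges = [tuple(random.sample(range(n), r)) for _ in range(m)]
rounds, core = parallel_peel(edges, n, k=2)
print(rounds, len(core))   # rounds needed and number of edges left in the 2-core
```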

Guest: Michael Mitzenmacher
Host: Stefan Schmid

Balanced Allocations and Double Hashing

Michael Mitzenmacher

Abstract: With double hashing, for an item x, one generates two hash values f(x) and g(x), and then uses combinations (f(x) + i·g(x)) mod n for i = 0, 1, 2, … to generate multiple hash values from the initial two. We show that the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of d choices, as well as in several related variants. We perform an empirical study and consider multiple theoretical approaches. While several techniques can be used to show asymptotic results for the maximum load, we demonstrate how fluid limit methods explain why the behavior of double hashing and fully random hashing is essentially indistinguishable in this context.
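
The allocation scheme under comparison is easy to state in code: each item receives d candidate bins, either the double-hashing sequence (f(x) + i·g(x)) mod n or d fully random bins, and it is placed in the least loaded candidate. In the illustrative sketch below, the hash values f(x) and g(x) are modelled as fresh uniform random values per item, a simplification rather than any particular hash family from the paper.

```python
import random

def max_load(num_items, n, d, double_hashing=True, seed=0):
    """Throw num_items balls into n bins, placing each in the least loaded of
    d candidate bins, and return the maximum load.

    With double_hashing=True the d candidates are (f(x) + i*g(x)) mod n;
    otherwise they are d independent uniformly random bins.
    """
    rng = random.Random(seed)
    loads = [0] * n
    for _ in range(num_items):
        if double_hashing:
            f = rng.randrange(n)
            g = rng.randrange(1, n)        # avoid g(x) = 0
            choices = [(f + i * g) % n for i in range(d)]
        else:
            choices = [rng.randrange(n) for _ in range(d)]
        best = min(choices, key=lambda b: loads[b])
        loads[best] += 1
    return max(loads)

n = 10_000
print("double hashing:", max_load(num_items=n, n=n, d=3, double_hashing=True))
print("fully random  :", max_load(num_items=n, n=n, d=3, double_hashing=False))
```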

Guest: Michael Mitzenmacher
Host: Stefan Schmid

Fault Tolerant Approximate BFS Trees

Merav Parter and David Peleg

Abstract: A fault-tolerant structure for a network is required to continue functioning following the failure of some of the network’s edges or vertices. This paper considers breadth-first search (BFS) spanning trees, and addresses the problem of designing a sparse fault-tolerant BFS tree, or FT-BFS tree for short, namely, a sparse subgraph T of the given network G such that subsequent to the failure of a single edge or vertex, the surviving part T’ of T still contains a BFS spanning tree for (the surviving part of) G. For a source node s, a target node t, and an edge e in G, the shortest s-t path P that does not go through e is known as a replacement path. Thus, our FT-BFS tree contains the collection of all replacement paths P_{s,t,e} for every t in V(G) and every failed edge e in E(G).

Our main results are as follows. We present an algorithm that for every n-vertex graph G and source node s constructs a (single edge failure) FT-BFS tree rooted at s with O(n·min(Depth(s), sqrt(n))) edges, where Depth(s) is the depth of the BFS tree rooted at s. This result is complemented by a matching lower bound, showing that there exist n-vertex graphs with a source node s for which any edge (or vertex) FT-BFS tree rooted at s has Ω(n^(3/2)) edges. We then consider fault-tolerant multi-source BFS trees, or FT-MBFS trees for short, aiming to provide (following a failure) a BFS tree rooted at each source s in S for some subset of sources. Again, tight bounds are provided, showing that there exists a poly-time algorithm that for every n-vertex graph and source set S of size σ constructs a (single failure) FT-MBFS tree T*(S) from each source s_i in S, with O(sqrt(σ)·n^(3/2)) edges, and on the other hand there exist n-vertex graphs with source sets S of cardinality σ, on which any FT-MBFS tree from S has Ω(sqrt(σ)·n^(3/2)) edges.

Finally, we propose an O(log n)-approximation algorithm for constructing FT-BFS and FT-MBFS structures. The latter is complemented by a hardness result stating that there exists no o(log n)-approximation algorithm for these problems under standard complexity assumptions. In comparison with the randomized FT-BFS construction implicit in [14], our algorithm is deterministic and may improve the number of edges by a factor of up to sqrt(n) for some instances. All our algorithms can be extended to deal with one vertex failure as well, with the same performance.
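
The replacement paths defined above can be computed directly by re-running BFS with the failed edge removed. The sketch below (illustrative code, not the paper's FT-BFS construction) collects the edges of one replacement path P_{s,t,e} for every target t and every failed edge e; their union preserves the required distances after any single edge failure, though it need not be sparse.

```python
from collections import deque

def bfs_tree(adj, source, banned_edge=None):
    """Parent pointers of a BFS tree from 'source', optionally avoiding one edge."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if banned_edge and {u, v} == set(banned_edge):
                continue
            if v not in parent:
                parent[v] = u
                queue.append(v)
    return parent

def replacement_path_edges(adj, source):
    """Union of the edges of one replacement path P(s,t,e) for every target t
    and every failed edge e."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    needed = set()
    for e in edges:
        parent = bfs_tree(adj, source, banned_edge=tuple(e))
        for t in parent:                       # walk each replacement path back to s
            v = t
            while parent[v] is not None:
                needed.add(frozenset((v, parent[v])))
                v = parent[v]
    return needed

# Small example: a 4-cycle; after any single edge failure, the source still reaches all nodes.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(len(replacement_path_edges(adj, source=0)))   # number of edges needed
```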

Guest: Merav Parter
Host: Stefan Schmid

A Polynomial-time Approximation Scheme for Fault-tolerant Distributed Storage

Constantinos Daskalakis, Anindya De, Ilias Diakonikolas, Ankur Moitra, and Rocco Servedio

Abstract: We consider a problem which has received considerable attention in the systems literature because of its applications to routing in delay-tolerant networks and replica placement in distributed storage systems. We provide an additive EPTAS for this problem. Our approach is inspired by, and extends, recent structural results from the complexity-theoretic study of linear threshold functions. Furthermore, in spite of the objective function being non-smooth, we give a unicriterion PTAS, while previous work for such objective functions has typically led to a bicriterion PTAS. We believe our techniques may be applicable to obtaining unicriterion PTAS for other non-smooth objective functions.

Guest: Anindya De, Berkeley, USA
Host: Stefan Schmid