Scheduling Loop-free Network Updates: It’s Good to Relax!

Arne Ludwig, Jasiek Marcinkowski, Stefan Schmid

Abstract: We consider the problem of updating arbitrary routes in a software-defined network in a (transiently) loop-free manner. We are interested in fast network updates, i.e., in schedules which minimize the number of interactions (i.e., rounds) between the controller and the network nodes. We first prove that this problem is difficult in general: The problem of deciding whether a k-round schedule exists is NP-complete already for k = 3, and there are problem instances requiring Ω(n) rounds, where n is the network size. Given these negative results, we introduce an attractive, relaxed notion of loop-freedom. We prove that O(log n)-round relaxed loop-free schedules always exist, and can also be computed efficiently.
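
To make the loop-freedom condition concrete, here is a minimal Python sketch (our illustration, not the paper's algorithm): a round switches a subset of nodes from their old forwarding rule to the new one, and the round is safe if the resulting mixed forwarding graph towards the destination contains no cycle. All names are hypothetical; `old_next`/`new_next` are assumed to map every node except the destination to its next hop.

```python
def is_loop_free(old_next, new_next, updated, dest):
    """True if forwarding towards dest has no loop after the nodes
    in `updated` have switched to their new rule."""
    fwd = {v: (new_next[v] if v in updated else old_next[v])
           for v in old_next}
    for start in fwd:
        seen, v = set(), start
        while v != dest:           # walk the forwarding path
            if v in seen:          # node revisited: transient loop
                return False
            seen.add(v)
            v = fwd[v]
    return True
```

A k-round schedule is then a partition of the nodes into sets S_1, …, S_k such that this check succeeds after each cumulative prefix S_1 ∪ … ∪ S_i has been updated.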

Guest: Arne Ludwig
Host: Yvonne-Anne Pignolet

Fault Tolerant BFS Structures: A Reinforcement-Backup Tradeoff

Merav Parter, David Peleg

Abstract: This paper initiates the study of fault-resilient network structures that mix two orthogonal protection mechanisms: (a) backup, namely, augmenting the structure with many (redundant) low-cost but fault-prone components, and (b) reinforcement, namely, acquiring high-cost but fault-resistant components. To study the trade-off between these two mechanisms in a concrete setting, we address the problem of designing a (b,r) fault-tolerant BFS (or (b,r) FT-BFS for short) structure, namely, a subgraph H of the network G consisting of two types of edges: a set E′ ⊆ E of r(n) fault-resistant reinforcement edges, which are assumed to never fail, and a (larger) set E(H) ∖ E′ of b(n) fault-prone backup edges, such that subsequent to the failure of a single fault-prone backup edge e ∈ E ∖ E′, the surviving part of H still contains a BFS spanning tree for (the surviving part of) G, satisfying dist(s, v, H ∖ {e}) ≤ dist(s, v, G ∖ {e}) for every v ∈ V and e ∈ E ∖ E′. We establish the following tradeoff between b(n) and r(n): for every real ϵ ∈ (0, 1], if r(n) = Θ~(n^{1−ϵ}), then b(n) = Θ~(n^{1+ϵ}) is necessary and sufficient.
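
The defining property of a (b,r) FT-BFS structure can be checked by brute force, which may help unpack the notation. The sketch below is our illustration (not the paper's construction); it represents undirected edges as two-element frozensets and verifies dist(s, v, H ∖ {e}) ≤ dist(s, v, G ∖ {e}) for every fault-prone edge e. All names are hypothetical.

```python
from collections import deque

def bfs_dist(edges, nodes, s):
    """Hop distances from s over an undirected edge set."""
    adj = {v: [] for v in nodes}
    for e in edges:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)
    d = {v: float('inf') for v in nodes}
    d[s] = 0
    q = deque([s])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if d[w] == float('inf'):
                d[w] = d[u] + 1
                q.append(w)
    return d

def is_ft_bfs(nodes, E, H, reinforced, s):
    """Check the FT-BFS distance condition for every backup edge."""
    for e in E - reinforced:          # reinforced edges never fail
        dH = bfs_dist(H - {e}, nodes, s)
        dG = bfs_dist(E - {e}, nodes, s)
        if any(dH[v] > dG[v] for v in nodes):
            return False
    return True
```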

Guest: Merav Parter
Host: Yvonne-Anne Pignolet

On Dynamic Bin Packing for Resource Allocation in the Cloud

Yusen Li, Xueyan Tang and Wentong Cai

Abstract: Dynamic Bin Packing (DBP) is a variant of classical bin packing which assumes that items may arrive and depart at arbitrary times. Existing work on DBP generally aims to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, namely the MinTotal DBP problem, which aims to minimize the total cost of the bins used over time. It is motivated by the request dispatching problem arising in cloud gaming systems. We analyze the competitive ratios of the commonly used First Fit, Best Fit, and Any Fit packing algorithms (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than the max/min item interval length ratio μ. The competitive ratio of Best Fit packing is not bounded for any given μ. For the general case, First Fit packing has a competitive ratio of 2μ + 13. We also propose a Modified First Fit packing algorithm that achieves a competitive ratio of 8/7·μ + 55/7 when μ is not known, and of μ + 8 when μ is known.

Guest: Yusen Li, Nanyang Technological University, Singapore
Host: Yvonne-Anne Pignolet
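
For intuition about the MinTotal objective, here is a small event-driven simulation of First Fit packing under it (our sketch, not code from the paper; all names are hypothetical). A bin pays for the interval during which it is open, and this sketch uses the simple accounting in which a bin closes as soon as it becomes empty.

```python
def first_fit_total_cost(items):
    """Total bin cost of First Fit on a dynamic instance.

    items: pairwise-distinct (arrive, depart, size) tuples, size in (0,1].
    """
    # Sort events; at equal times departures (kind 0) precede arrivals.
    events = sorted([(it[1], 0, it) for it in items] +
                    [(it[0], 1, it) for it in items])
    bins = []        # each bin is [load, opened_at, set_of_items]
    where = {}       # item -> bin holding it
    total = 0.0
    for t, kind, it in events:
        size = it[2]
        if kind == 1:                    # arrival: scan bins in order
            for b in bins:
                if b[0] + size <= 1.0:   # First Fit: first bin that fits
                    b[0] += size
                    b[2].add(it)
                    where[it] = b
                    break
            else:                        # nothing fits: open a new bin
                b = [size, t, {it}]
                bins.append(b)
                where[it] = b
        else:                            # departure
            b = where.pop(it)
            b[0] -= size
            b[2].discard(it)
            if not b[2]:                 # bin empty: close it and pay
                total += t - b[1]
                bins.remove(b)
    return total
```

The ratio μ in the bounds above compares the longest item interval (depart − arrive) to the shortest.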

The power of two choices in distributed voting

Colin Cooper, Robert Elsässer, and Tomasz Radzik

Abstract: Distributed voting is a fundamental topic in distributed computing. In pull voting, in each step every vertex chooses a neighbour uniformly at random, and adopts its opinion. The voting is completed when all vertices hold the same opinion. On many graph classes including regular graphs, pull voting requires Θ(n) expected steps to complete, even if initially there are only two distinct opinions.
In this paper we consider a related process which we call two-sample voting: every vertex chooses two random neighbours in each step. If the opinions of these neighbours coincide, then the vertex revises its opinion according to the chosen sample. Otherwise, it keeps its own opinion. We consider the performance of this process in the case where two different opinions reside on the vertices of some (arbitrary) sets A and B, respectively. Here, |A| + |B| = n is the number of vertices of the graph.
We show that there is a constant K such that if the initial imbalance between the two opinions is sufficiently large (as a function of K, d, and n), then with high probability two-sample voting completes in a random d-regular graph in O(log n) steps and the initial majority opinion wins. We also show the same performance for any regular graph, provided the second-largest eigenvalue of the transition matrix satisfies suitable bounds. In the graphs we consider, standard pull voting requires Ω(n) steps, and the minority can still win with probability |B|/n.
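
The two-sample voting process is easy to simulate directly. The following sketch is ours, with hypothetical names; neighbours are sampled with replacement for simplicity. It runs synchronous rounds until one opinion remains:

```python
import random

def two_sample_voting(adj, opinion, max_steps=10**6):
    """Run two-sample voting until consensus (or max_steps).

    adj: dict vertex -> list of neighbours.
    opinion: dict vertex -> initial opinion, e.g. 'A' or 'B'.
    Returns (winning opinion or None, steps taken).
    """
    for step in range(1, max_steps + 1):
        new = {}
        for v, nbrs in adj.items():
            a, b = random.choice(nbrs), random.choice(nbrs)
            # Adopt the sampled opinion only if the two samples agree.
            new[v] = opinion[a] if opinion[a] == opinion[b] else opinion[v]
        opinion = new
        if len(set(opinion.values())) == 1:
            return next(iter(opinion.values())), step
    return None, max_steps
```

On a random d-regular graph with a large enough initial majority, such runs typically finish within O(log n) steps, in line with the theorem.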

Guest: Robert Elsässer, Universität Salzburg, http://uni-salzburg.at/index.php?id=53909
Host: Yvonne-Anne Pignolet

Scheduling Shared Continuous Resources on Many-Cores

André Brinkmann, Peter Kling, Friedhelm Meyer auf der Heide, Lars Nagel, Sören Riechers and Tim Süß

Abstract: We consider the problem of scheduling a number of jobs on m identical processors sharing a continuously divisible resource. Each job j comes with a resource requirement r_j ∈ [0,1]. The job can be processed at full speed if granted its full resource requirement. If receiving only an x-portion of r_j, it is processed at an x-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their processing speeds, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature.

In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of the shared continuous resource to the processors; the assignment of jobs to processors and the ordering of the jobs are assumed to be already fixed. It is shown that, even for unit-size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit-size jobs include an efficient optimal algorithm for 2 processors. Moreover, we prove that balanced schedules yield a (2 − 1/m)-approximation for a fixed number m of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.
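
The model is simple to simulate once a resource-sharing policy is fixed. The sketch below is ours; it implements a naive proportional-share policy, not the paper's GreedyBalance algorithm. It computes the makespan for unit-size jobs: when the head jobs of the processors demand total resource R > 1, every job receives an r_j/R share and hence runs at speed 1/R.

```python
def makespan_proportional(queues):
    """Makespan of unit-size jobs under proportional resource sharing.

    queues: one list per processor of resource requirements r in (0,1];
    job order per processor is fixed, each job needs 1 unit of work.
    """
    heads = [list(q) for q in queues]            # remaining jobs
    work = [1.0 if q else 0.0 for q in heads]    # work left on heads
    t = 0.0
    while any(heads):
        active = [i for i, q in enumerate(heads) if q]
        R = sum(heads[i][0] for i in active)     # total resource demand
        speed = min(1.0, 1.0 / R)                # common speed of heads
        dt = min(work[i] / speed for i in active)
        t += dt
        for i in active:
            work[i] -= dt * speed
            if work[i] <= 1e-12:                 # head job finished
                heads[i].pop(0)
                work[i] = 1.0 if heads[i] else 0.0
    return t
```

For example, makespan_proportional([[0.6, 0.3], [0.8]]) returns 2.4: both head jobs share the resource at speed 1/1.4 for 1.4 time units, after which the remaining job runs at full speed.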

Guest: Sören Riechers, Universität Paderborn, https://www.hni.uni-paderborn.de/alg/mitarbeiter/soerenri/

Host: Yvonne-Anne Pignolet

Lower Bounds for Distinct Elements in the Message Passing Model

David Woodruff and Qin Zhang

This interview introduces the results of two papers, one published at SODA 2014 and the other at DISC 2013.

———— SODA 2014 ————
Title: An Optimal Lower Bound for Distinct Elements in the Message Passing Model

Abstract: In the message-passing model of communication, there are k players, each with their own private input, who try to compute or approximate a function of their inputs by sending messages to one another over private channels. We consider the setting in which each player holds a subset S_i of elements of a universe of size n, and their goal is to output a (1+ε)-approximation to the total number of distinct elements in the union of the sets S_i with constant probability, which can be amplified by independent repetition. This problem has applications in data mining, sensor networks, and network monitoring. We resolve the communication complexity of this problem up to a constant factor, for all settings of n, k and ε, by showing a lower bound of Ω(k · min(n, 1/ε²) + k log n) bits. This improves upon previous results, which either had non-trivial restrictions on the relationships between the values of n, k and ε, or were suboptimal by logarithmic factors, or both.
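
For contrast with the lower bound, a standard communication-efficient protocol for this setting is the k-minimum-values (KMV) sketch: each player sends its t smallest hash values, and the coordinator merges them. The code below is our illustration with hypothetical names, not from the paper; with t ≈ 1/ε², the output is a (1+ε)-approximation with constant probability, at a cost of t hash values per player.

```python
import hashlib

def _h(x):
    """Hash an element to a float in (0, 1]."""
    d = hashlib.sha256(str(x).encode()).digest()
    return (int.from_bytes(d[:8], 'big') + 1) / 2.0**64

def kmv_message(S, t):
    """Player's message: the t smallest hash values of its set S_i."""
    return sorted(_h(x) for x in S)[:t]

def estimate_distinct(messages, t):
    """Coordinator: estimate the number of distinct elements."""
    merged = sorted(set(v for msg in messages for v in msg))
    if len(merged) < t:              # all hashes present: exact count
        return len(merged)
    return (t - 1) / merged[t - 1]   # classic KMV estimator
```

Usage: est = estimate_distinct([kmv_message(S, t) for S in sets], t).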

———— DISC 2013 ————
Title: When Distributed Computation is Communication Expensive

Abstract: We consider a number of fundamental statistical and graph problems in the message-passing model, where we have k machines (sites), each holding a piece of data, and the machines want to jointly solve a problem defined on the union of the k data sets. The communication is point-to-point, and the goal is to minimize the total communication among the k machines. This model captures all point-to-point distributed computational models with respect to minimizing communication costs. Our analysis shows that exact computation of many statistical and graph problems in this distributed setting requires a prohibitively large amount of communication, and often one cannot improve upon the communication of the simple protocol in which all machines send their data to a centralized server. Thus, in order to obtain protocols that are communication-efficient, one has to allow approximation, or investigate the distribution or layout of the data sets.

Guest: Qin Zhang
Host: Yvonne-Anne Pignolet

Improved bounds and algorithms for graph cuts and network reliability

David Harris and Aravind Srinivasan

Abstract: Karger developed the first fully-polynomial approximation scheme to estimate the probability that a graph G becomes disconnected, given that its edges are removed independently with probability p. We improve this runtime in two key ways, one algorithmic and one graph-theoretic. From an algorithmic point of view, there is a certain key sub-problem encountered by Karger, for which a generic estimation procedure is employed. We show that this sub-problem has a special structure for which a much more efficient algorithm can be used. From a graph-theoretic point of view, we show better bounds on the number of edge cuts which are likely to fail. Karger’s analysis depends on bounds for various graph parameters; we show that these bounds cannot be simultaneously tight. We describe a new graph parameter, which simultaneously influences all the bounds used by Karger, and use it to obtain much tighter estimates of the behavior of the cuts of G.
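
The quantity being approximated is easy to state in code. This naive Monte Carlo baseline (our sketch, not Karger's scheme; all names hypothetical) samples edge failures and counts how often the graph disconnects; its weakness, needing roughly 1/Pr[disconnect] samples when that probability is tiny, is precisely why an FPRAS is valuable.

```python
import random

def disconnect_prob(n, edges, p, trials=10000, seed=0):
    """Estimate Pr[G disconnects] when each edge fails independently
    with probability p.  Vertices are 0..n-1."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        parent = list(range(n))          # union-find over vertices
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v
        comps = n
        for u, v in edges:
            if rng.random() >= p:        # this edge survives
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    comps -= 1
        if comps > 1:                    # surviving graph disconnected
            failures += 1
    return failures / trials
```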

Guest: David Harris
Host: Yvonne-Anne Pignolet

A Super-Fast Distributed Algorithm for Bipartite Metric Facility Location

James Hegeman and Sriram Pemmaraju

Abstract: The facility location problem consists of a set of facilities F, a set of clients C, an opening cost f_i associated with each facility x_i, and a connection cost D(x_i,y_j) between each facility x_i and client y_j. The goal is to find a subset of facilities to open, and to connect each client to an open facility, so as to minimize the total facility opening costs plus connection costs. This paper presents the first expected-sub-logarithmic-round distributed O(1)-approximation algorithm in the CONGEST model for the metric facility location problem on the complete bipartite network with parts F and C. Our algorithm has an expected running time of O((log log n)^3) rounds, where n = |F| + |C|. This result can be viewed as a continuation of our recent work (ICALP 2012) in which we presented the first sub-logarithmic-round distributed O(1)-approximation algorithm for metric facility location on a clique network. The bipartite setting presents several new challenges not present in the problem on a clique network. We present two new techniques to overcome these challenges. (i) In order to deal with the problem of not being able to choose appropriate probabilities (due to lack of adequate knowledge), we design an algorithm that performs a random walk over a probability space and analyze the progress our algorithm makes as the random walk proceeds. (ii) In order to deal with a problem of quickly disseminating a collection of messages, possibly containing many duplicates, over the bipartite network, we design a probabilistic hashing scheme that delivers all of the messages in expected-O(log log n) rounds.
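
The objective being approximated is straightforward to evaluate sequentially; the hard part is doing so distributively in few CONGEST rounds. A minimal sketch with hypothetical names:

```python
def facility_location_cost(open_set, f, D, clients):
    """Opening costs plus each client's cheapest connection.

    open_set: nonempty set of opened facility indices.
    f[i]: opening cost of facility i.
    D[i][j]: connection cost between facility i and client j.
    """
    opening = sum(f[i] for i in open_set)
    connection = sum(min(D[i][j] for i in open_set) for j in clients)
    return opening + connection
```

The O(1)-approximation guarantee says the algorithm's output has cost at most a constant times the minimum of this function over all choices of open_set.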

Guest: James Hegeman
Host: Yvonne-Anne Pignolet

Sleeping Experts in Wireless Networks

Johannes Dams, Martin Hoefer and Thomas Kesselheim

Abstract: We consider capacity maximization algorithms for wireless networks with changing availabilities of spectrum. There are n sender-receiver pairs (called links) and k channels. We consider an iterative round-based scenario, where in each round the set of channels available to each link changes. Each link independently decides about access to one available channel in order to implement a successful transmission. Transmissions are subject to interference and noise, and we use a general approach based on affectance to define which attempts are successful. This includes recently popular interference models based on SINR.
Our main result is that efficient distributed algorithms from sleeping-expert regret learning can be used to obtain constant-factor approximations if channel availability is stochastic and independently distributed among links. In general, sublinear approximation factors cannot be obtained without the assumption of stochastic independence among links. A direct application of the no-external regret property is not sufficient to guarantee small approximation factors.
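
As a concrete instance of the affectance-based success criterion, here is an SINR feasibility check (our sketch; the names and the parameters alpha, beta, and noise are hypothetical): a link succeeds on its chosen channel if its received signal power is at least beta times the noise plus the interference from other links on the same channel.

```python
def successful_links(links, channel, alpha=3.0, beta=1.0, noise=1.0):
    """Indices of links that satisfy the SINR condition.

    links: list of (sender_xy, receiver_xy, power) tuples.
    channel: chosen channel per link, or None if the link stays silent.
    """
    def recv_power(src, dst, power):
        d = ((src[0] - dst[0])**2 + (src[1] - dst[1])**2) ** 0.5
        return power / d**alpha

    ok = []
    for i, (s_i, r_i, p_i) in enumerate(links):
        if channel[i] is None:
            continue
        signal = recv_power(s_i, r_i, p_i)
        interference = sum(recv_power(s_j, r_i, p_j)
                           for j, (s_j, _, p_j) in enumerate(links)
                           if j != i and channel[j] == channel[i])
        if signal >= beta * (noise + interference):
            ok.append(i)
    return ok
```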

Guest: Johannes Dams
Host: Yvonne-Anne Pignolet

On the Precision of Social and Information Networks

Reza Bosagh Zadeh, Ashish Goel, Kamesh Munagala, Aneesh Sharma

Abstract: The diffusion of information on online social and information networks has been a popular topic of study in recent years, but attention has typically focused on the speed of dissemination and on recall (i.e., the fraction of users getting a piece of information). In this paper, we study the complementary notion of the precision of information diffusion. Our model of information dissemination is “broadcast-based”, i.e., one where every message (original or forwarded) from a user goes to a fixed set of recipients, often called the user’s “friends” or “followers”, as in Facebook and Twitter. The precision of the diffusion process is then defined as the fraction of received messages that a user finds interesting.
At first glance, it seems that broadcast-based information diffusion is a “blunt” targeting mechanism, and must necessarily suffer from low precision. Somewhat surprisingly, we present preliminary experimental and analytical evidence to the contrary: it is possible to simultaneously have high precision (i.e., precision bounded below by a constant), high recall, and low diameter!
We start by presenting a set of conditions on the structure of user interests, and analytically show the necessity of each of these conditions for obtaining high precision. We also present preliminary experimental evidence from Twitter verifying that these conditions are satisfied. We then prove that the Kronecker-graph based generative model of Leskovec et al. satisfies these conditions given an appropriate and natural definition of user interests. Further, we show that this model also has high precision, high recall, and low diameter. We finally present preliminary experimental evidence showing Twitter has high precision, validating our conclusion. This is perhaps a first step towards a formal understanding of the immense popularity of online social networks as an information dissemination mechanism.
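
The precision and recall notions used above can be pinned down with a toy computation (our sketch, with hypothetical names): precision is the fraction of delivered messages the recipient finds interesting, and recall is the fraction of (user, topic) pairs the user cares about that are actually delivered.

```python
def precision_recall(followers, interests, posts):
    """Precision/recall of broadcast diffusion on a toy instance.

    followers: dict author -> set of users receiving their broadcasts.
    interests: dict user -> set of topics the user finds interesting.
    posts: list of (author, topic) messages.
    """
    received = interesting = 0
    delivered = set()                     # (recipient, topic) pairs
    for author, topic in posts:
        for u in followers.get(author, ()):
            received += 1
            delivered.add((u, topic))
            if topic in interests.get(u, ()):
                interesting += 1
    posted = {topic for _, topic in posts}
    wanted = {(u, t) for u, ts in interests.items() for t in ts & posted}
    precision = interesting / received if received else 1.0
    recall = len(delivered & wanted) / len(wanted) if wanted else 1.0
    return precision, recall
```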

Guest: Aneesh Sharma
Host: Yvonne-Anne Pignolet