A Super-Fast Distributed Algorithm for Bipartite Metric Facility Location

James Hegeman and Sriram Pemmaraju

Abstract:  The facility location problem consists of a set of facilities F, a set of clients C, an opening cost f_i associated with each facility x_i, and a connection cost D(x_i,y_j) between each facility x_i and client y_j. The goal is to find a subset of facilities to open, and to connect each client to an open facility, so as to minimize the total facility opening costs plus connection costs. This paper presents the first expected-sub-logarithmic-round distributed O(1)-approximation algorithm in the CONGEST model for the metric facility location problem on the complete bipartite network with parts F and C. Our algorithm has an expected running time of O((log log n)^3) rounds, where n = |F| + |C|. This result can be viewed as a continuation of our recent work (ICALP 2012) in which we presented the first sub-logarithmic-round distributed O(1)-approximation algorithm for metric facility location on a clique network. The bipartite setting presents several new challenges not present in the problem on a clique network. We present two new techniques to overcome these challenges. (i) In order to deal with the problem of not being able to choose appropriate probabilities (due to lack of adequate knowledge), we design an algorithm that performs a random walk over a probability space and analyze the progress our algorithm makes as the random walk proceeds. (ii) In order to deal with a problem of quickly disseminating a collection of messages, possibly containing many duplicates, over the bipartite network, we design a probabilistic hashing scheme that delivers all of the messages in expected-O(log log n) rounds.
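
To make the objective concrete, here is a minimal centralized Python sketch that evaluates the facility location cost of a candidate solution. The function and variable names are hypothetical, and this is not the paper's distributed algorithm, only the quantity it approximates.

# Hypothetical sketch: evaluating the metric facility location objective
# for a candidate solution (not the paper's distributed algorithm).

def facility_location_cost(opening_cost, distance, open_facilities):
    """opening_cost[i]: cost f_i of opening facility i.
    distance[i][j]: metric connection cost D(x_i, y_j).
    open_facilities: subset of facility indices chosen to open.
    Each client connects to its cheapest open facility."""
    total = sum(opening_cost[i] for i in open_facilities)
    num_clients = len(distance[0])
    for j in range(num_clients):
        total += min(distance[i][j] for i in open_facilities)
    return total

# Example: 2 facilities, 3 clients.
f = [3.0, 5.0]
D = [[1.0, 2.0, 4.0],
     [4.0, 1.0, 1.0]]
print(facility_location_cost(f, D, {0}))      # open only facility 0
print(facility_location_cost(f, D, {0, 1}))   # open both facilities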

Guest: James Hegeman
Host: Yvonne-Anne Pignolet

Sleeping Experts in Wireless Networks

Johannes Dams, Martin Hoefer and Thomas Kesselheim

Abstract: We consider capacity maximization algorithms for wireless networks with changing availabilities of spectrum. There are n sender-receiver pairs (called links) and k channels. We consider an iterative round-based scenario, where in each round the set of channels available to each link changes. Each link independently decides which of its available channels to access in order to achieve a successful transmission. Transmissions are subject to interference and noise, and we use a general approach based on affectance to define which attempts are successful. This includes recently popular interference models based on SINR.
Our main result is that efficient distributed algorithms from sleeping-expert regret learning can be used to obtain constant-factor approximations if channel availability is stochastic and independently distributed among links. In general, sublinear approximation factors cannot be obtained without the assumption of stochastic independence among links. A direct application of the no-external-regret property is not sufficient to guarantee small approximation factors.
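
For intuition about the learning component, below is a toy Python sketch of a sleeping-experts style multiplicative-weight rule for a single link: only the channels available in the current round are eligible ("awake"), and the chosen channel's weight is reinforced when the transmission succeeds. The class name, learning rate, and feedback mechanism are illustrative assumptions; this does not reproduce the algorithm or analysis from the paper.

import random

# Toy sketch (not the paper's algorithm): one link choosing among k channels
# with a sleeping-experts style multiplicative-weight rule.

class SleepingExpertsLink:
    def __init__(self, num_channels, eta=0.1):
        self.weights = [1.0] * num_channels
        self.eta = eta

    def choose(self, available):
        # Sample an available channel with probability proportional to its weight.
        total = sum(self.weights[c] for c in available)
        r = random.uniform(0.0, total)
        for c in available:
            r -= self.weights[c]
            if r <= 0.0:
                return c
        return available[-1]

    def update(self, channel, success):
        # Boost the chosen channel's weight on success, dampen it on failure.
        if success:
            self.weights[channel] *= (1.0 + self.eta)
        else:
            self.weights[channel] *= (1.0 - self.eta)

# Usage: each round, pick among the currently available channels, attempt a
# transmission, then feed back whether it was successful.
link = SleepingExpertsLink(num_channels=4)
available = [0, 2, 3]
c = link.choose(available)
link.update(c, success=random.random() < 0.5)  # placeholder feedback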

Guest: Johannes Dams
Host: Yvonne-Anne Pignolet

On the Precision of Social and Information Networks

Reza Bosagh Zadeh, Ashish Goel, Kamesh Munagala, Aneesh Sharma

Abstract: The diffusion of information on online social and information networks has been a popular topic of study in recent years, but attention has typically focused on the speed of dissemination and on recall (i.e., the fraction of users receiving a piece of information). In this paper, we study the complementary notion of the precision of information diffusion. Our model of information dissemination is “broadcast-based”, i.e., one where every message (original or forwarded) from a user goes to a fixed set of recipients, often called the user’s “friends” or “followers”, as in Facebook and Twitter. The precision of the diffusion process is then defined as the fraction of received messages that a user finds interesting.
At first glance, it seems that broadcast-based information diffusion is a “blunt” targeting mechanism, and must necessarily suffer from low precision. Somewhat surprisingly, we present preliminary experimental and analytical evidence to the contrary: it is possible to simultaneously have high precision (i.e., precision bounded below by a constant), high recall, and low diameter!
We start by presenting a set of conditions on the structure of user interests, and analytically show the necessity of each of these conditions for obtaining high precision. We also present preliminary experimental evidence from Twitter verifying that these conditions are satisfied. We then prove that the Kronecker-graph based generative model of Leskovec et al. satisfies these conditions given an appropriate and natural definition of user interests. Further, we show that this model also has high precision, high recall, and low diameter. We finally present preliminary experimental evidence showing Twitter has high precision, validating our conclusion. This is perhaps a first step towards a formal understanding of the immense popularity of online social networks as an information dissemination mechanism.
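
As a concrete illustration of the two quantities, here is a small self-contained Python sketch that computes precision and recall for a one-hop broadcast over a toy follower graph. The data and the notion of "interesting" as topic membership are assumptions made for the example, not the paper's experimental setup.

# Illustrative sketch (hypothetical data, not from the paper): computing the
# precision and recall of a broadcast where every message posted by a user is
# delivered to all of that user's followers.

# followers[u]: set of users who receive u's messages.
followers = {"alice": {"bob", "carol"}, "bob": {"carol"}, "carol": set()}
# posts[u]: topics that u posts about; interests[u]: topics u finds interesting.
posts = {"alice": ["sports", "music"], "bob": ["music"], "carol": []}
interests = {"alice": {"sports"}, "bob": {"music"}, "carol": {"music"}}

received = {u: [] for u in followers}          # messages delivered to each user
for author, topics in posts.items():
    for topic in topics:
        for follower in followers[author]:
            received[follower].append(topic)

# Precision of a user: fraction of received messages that match their interests.
for user, inbox in received.items():
    if inbox:
        precision = sum(t in interests[user] for t in inbox) / len(inbox)
        print(user, "precision:", precision)

# Recall of a topic: fraction of interested users who actually received it.
topic = "music"
interested = {u for u in interests if topic in interests[u]}
reached = {u for u in interested if topic in received[u]}
print("recall of", topic, ":", len(reached) / len(interested) if interested else None)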

Guest: Aneesh Sharma
Host: Yvonne-Anne Pignolet

Social Resilience in Online Communities: The Autopsy of Friendster

David Garcia, Pavlin Mavrodiev, Frank Schweitzer

Abstract: We empirically analyze five online communities (Friendster, Livejournal, Facebook, Orkut, and Myspace) to study how social networks decline. We define social resilience as the ability of a community to withstand changes. We do not argue about the causes of such changes, but concentrate on their impact. Changes may cause users to leave, which may trigger further departures by others who have lost connections to their friends, potentially leading to cascades of users leaving. A social network is said to be resilient if the size of such cascades can be limited. To quantify resilience, we use k-core analysis to identify subsets of the network in which all users have at least k friends. These connections generate benefits (b) for each user, which have to outweigh the costs (c) of being a member of the network. If this difference is not positive, users leave. After all cascades, the remaining network is the k-core of the original network determined by the cost-to-benefit (c/b) ratio. By analysing the cumulative distribution of k-cores, we are able to calculate the number of users remaining in each community. This allows us to infer the impact of the c/b ratio on the resilience of these online communities. We find that the different online communities have different k-core distributions; consequently, similar changes in the c/b ratio have a different impact on the number of active users. Further, our resilience analysis shows that the topology of a social network alone cannot explain its success or failure. As a case study, we focus on the evolution of Friendster. We identify time periods when new users entering the network observed an insufficient c/b ratio; this measure can be seen as a precursor of the later collapse of the community. Our analysis can be applied to estimate the impact of changes in the user interface, which may temporarily increase the c/b ratio, thus posing the threat that the community will shrink or even collapse.
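
The k-core computation underlying this analysis can be illustrated with a short Python sketch that repeatedly removes users with fewer than k friends. The toy graph is an assumption for the example and has nothing to do with the Friendster data.

# Minimal k-core sketch: repeatedly remove users with fewer than k friends;
# whoever survives forms the k-core, i.e. the users for whom the benefit of
# having at least k connections can outweigh the cost of staying.

def k_core(adjacency, k):
    """adjacency: dict mapping each user to a set of friends."""
    adj = {u: set(nbrs) for u, nbrs in adjacency.items()}
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:            # user u leaves the network
                for v in adj[u]:
                    adj[v].discard(u)      # u's friends lose this connection
                del adj[u]
                changed = True
    return set(adj)

# Example: a triangle plus a pendant user; the 2-core drops the pendant.
graph = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
print(k_core(graph, 2))   # {'a', 'b', 'c'}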

Guest: David Garcia
Host: Yvonne-Anne Pignolet


Braess’s Paradox in Wireless Networks: The Danger of Improved Technology

Michael Dinitz and Merav Parter

Abstract: When comparing new wireless technologies, it is common to consider the effect that they have on the capacity of the network (defined as the maximum number of simultaneously satisfiable links).  For example, it has been shown that giving receivers the ability to do interference cancellation, or allowing transmitters to use power control, never decreases the capacity and can in certain cases increase it by Omega(log (Delta Pmax)), where Delta is the ratio of the longest link length to the smallest transmitter-receiver distance and Pmax is the maximum transmission power.  But there is no reason to expect the optimal capacity to be realized in practice, particularly since maximizing the capacity is known to be NP-hard.  In reality, we would expect links to behave as self-interested agents, and thus when introducing a new technology it makes more sense to compare the values reached at game-theoretic equilibria than the optimum values.
In this paper we initiate this line of work by comparing various notions of equilibria (particularly Nash equilibria and no-regret behavior) when using a supposedly “better” technology. We show a version of Braess’s Paradox for all of them: in certain networks, upgrading technology can actually make the equilibria worse, despite an increase in the capacity. We construct instances where this decrease is a constant factor for power control, interference cancellation, and improvements in the SINR threshold beta, and is Omega(log Delta) when power control is combined with interference cancellation. However, we show that these examples are basically tight: the decrease is at most O(1) for power control, interference cancellation, and improved beta, and is at most O(log Delta) when power control is combined with interference cancellation.
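
For readers unfamiliar with the underlying model, the following Python sketch shows the standard SINR feasibility test for a set of links: a set is simultaneously satisfiable if every receiver's signal-to-interference-plus-noise ratio is at least the threshold beta. The geometry, path-loss exponent, and parameter values are assumptions for illustration; the paper itself works with a more general affectance framework.

# Hedged sketch of the standard SINR feasibility test (illustrative only).

def sinr_feasible(links, power, beta, noise, alpha=3.0):
    """links: list of (sender_position, receiver_position) pairs in the plane.
    power[i]: transmission power of link i; alpha: path-loss exponent."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    for i, (_, rx_i) in enumerate(links):
        signal = power[i] / dist(links[i][0], rx_i) ** alpha
        interference = sum(power[j] / dist(links[j][0], rx_i) ** alpha
                           for j in range(len(links)) if j != i)
        if signal / (interference + noise) < beta:
            return False
    return True

# Two well-separated links can transmit simultaneously; moving them closer
# together raises the interference term and can violate the SINR constraint.
links = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (11.0, 0.0))]
print(sinr_feasible(links, power=[1.0, 1.0], beta=2.0, noise=0.01))
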
Guest: Michael Dinitz
Host: Yvonne-Anne Pignolet

Distributed Minimum Cut Approximation

Abstract: We study the problem of computing approximate minimum edge cuts by distributed algorithms. We present two randomized approximation algorithms that both run in a standard synchronous message-passing model in which, in each round, $O(\log n)$ bits can be transmitted over every edge (a.k.a. the CONGEST model). The first algorithm is based on a simple and new approach for analyzing random edge sampling, which we call the random layering technique. For any weighted graph and any $\epsilon \in (0, 1)$, the algorithm finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2 + \epsilon})$ rounds, where $\lambda$ is the minimum-cut size and the $\tilde{O}$-notation hides poly-logarithmic factors in $n$. In addition, using the outline of a centralized algorithm due to Matula [SODA '93], we present a randomized algorithm that computes a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon>0$. The time complexities of our algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus answering an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11].
To complement our upper bound results, we also strengthen the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$). For unweighted simple graphs, we show that computing an $\alpha$-approximate minimum cut requires time at least $\tilde{\Omega}(D + \sqrt{n}/\alpha^{1/4})$.
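
As a rough illustration of the sampling intuition behind such algorithms (this is not the distributed random layering algorithm itself), the toy centralized Python sketch below keeps each edge with probability p and reports the connected components of the sample; cuts with few edges, such as a single bridge, are the ones most likely to vanish entirely, so component boundaries point to candidate small cuts.

import random

# Toy centralized sketch of random edge sampling (not the paper's algorithm).

def sample_components(nodes, edges, p):
    parent = {v: v for v in nodes}          # union-find over sampled edges

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for u, v in edges:
        if random.random() < p:             # keep each edge with probability p
            parent[find(u)] = find(v)
    comps = {}
    for v in nodes:
        comps.setdefault(find(v), set()).add(v)
    return list(comps.values())

# Two 4-cliques joined by a single bridge edge: the bridge (the minimum cut)
# is far more likely to vanish from the sample than any 3-edge cut inside a
# clique, so the two cliques tend to appear as separate components.
nodes = list(range(8))
clique1 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
clique2 = [(i, j) for i in range(4, 8) for j in range(i + 1, 8)]
edges = clique1 + clique2 + [(3, 4)]
print(sample_components(nodes, edges, p=0.5))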

Guest: Mohsen Ghaffari (Electrical Engineering and Computer Science Department, Massachusetts Institute of Technology)
Host: Merav Parter

New Challenges and Algebraic Topology

Maurice Herlihy

Maurice Herlihy discusses his views on the challenges distributed computing will face in the future, describes some of his work on programming abstractions, and explains how he uses algebraic topology as a tool to reason about distributed protocols.

Guest: Maurice Herlihy
Host: Zvi Lotker and Yvonne-Anne Pignolet