Social Resilience in Online Communities: The Autopsy of Friendster

David Garcia, Pavlin Mavrodiev, Frank Schweitzer

Abstract: We empirically analyze five online communities: Friendster, Livejournal, Facebook, Orkut, and Myspace, to study how social networks decline. We define social resilience as the ability of a community to withstand changes. We do not argue about the cause of such changes, but concentrate on their impact. Changes may cause users to leave, which may trigger further departures by others who have lost connections to their friends. This may lead to cascades of users leaving. A social network is said to be resilient if the size of such cascades can be limited. To quantify resilience, we use k-core analysis to identify subsets of the network in which all users have at least $k$ friends. These connections generate benefits $b$ for each user, which have to outweigh the costs $c$ of being a member of the network. If this difference is not positive, users leave. After all cascades, the remaining network is the k-core of the original network, determined by the cost-to-benefit ratio $c/b$. By analysing the cumulative distribution of k-cores we are able to calculate the number of users remaining in each community. This allows us to infer the impact of the $c/b$ ratio on the resilience of these online communities. We find that the different online communities have different k-core distributions. Consequently, similar changes in the $c/b$ ratio have a different impact on the number of active users. Further, our resilience analysis shows that the topology of a social network alone cannot explain its success or failure. As a case study, we focus on the evolution of Friendster. We identify time periods when new users entering the network observed an insufficient $c/b$ ratio. This measure can be seen as a precursor of the later collapse of the community. Our analysis can be applied to estimate the impact of changes in the user interface, which may temporarily increase the $c/b$ ratio, thus posing the threat that the community will shrink or even collapse.
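
The leave cascade described above is exactly a k-core computation: users whose degree drops below the threshold k (set by the cost-to-benefit ratio c/b) leave, possibly pushing their friends below the threshold, and the survivors form the k-core. A minimal Python sketch, assuming a plain adjacency-list representation of the friendship graph:

from collections import deque

def k_core(adj, k):
    """Iteratively remove users with fewer than k remaining friends.
    adj: dict mapping each user to the set of its friends (undirected).
    Returns the set of users that survive the leave cascade, i.e. the k-core."""
    degree = {u: len(friends) for u, friends in adj.items()}
    queue = deque(u for u, d in degree.items() if d < k)
    removed = set(queue)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in removed:
                degree[v] -= 1
                if degree[v] < k:
                    removed.add(v)
                    queue.append(v)
    return set(adj) - removed

For example, k_core({1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}, 2) returns {1, 2, 3}: user 4 leaves because only one friend remains, while the triangle 1-2-3 persists.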

Guest: David Garcia
Host: Yvonne-Anne Pignolet

 

Braess’s Paradox in Wireless Networks: The Danger of Improved Technology

Michael Dinitz and Merav Parter

Abstract: When comparing new wireless technologies, it is common to consider the effect that they have on the capacity of the network (defined as the maximum number of simultaneously satisfiable links). For example, it has been shown that giving receivers the ability to do interference cancellation, or allowing transmitters to use power control, never decreases the capacity and can in certain cases increase it by $\Omega(\log(\Delta \cdot P_{\max}))$, where $\Delta$ is the ratio of the longest link length to the smallest transmitter-receiver distance and $P_{\max}$ is the maximum transmission power. But there is no reason to expect the optimal capacity to be realized in practice, particularly since maximizing the capacity is known to be NP-hard. In reality, we would expect links to behave as self-interested agents, and thus when introducing a new technology it makes more sense to compare the values reached at game-theoretic equilibria than the optimum values.
In this paper we initiate this line of work by comparing various notions of equilibria (particularly Nash equilibria and no-regret behavior) when using a supposedly “better” technology. We show a version of Braess's Paradox for all of them: in certain networks, upgrading technology can actually make the equilibria \emph{worse}, despite an increase in the capacity. We construct instances where this decrease is a constant factor for power control, interference cancellation, and improvements in the SINR threshold $\beta$, and is $\Omega(\log \Delta)$ when power control is combined with interference cancellation. However, we show that these examples are basically tight: the decrease is at most $O(1)$ for power control, interference cancellation, and improved $\beta$, and is at most $O(\log \Delta)$ when power control is combined with interference cancellation.
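
For context, the capacity notion above refers to the standard SINR (physical) model; a link $\ell_v = (u,v)$ is successfully received when, with path-loss exponent $\alpha$ and ambient noise $N$ (model details assumed here, following the usual conventions rather than quoted from the paper),
\[
\mathrm{SINR}(\ell_v) \;=\; \frac{P_u \, d(u,v)^{-\alpha}}{N + \sum_{\ell_w \neq \ell_v} P_w \, d(w,v)^{-\alpha}} \;\geq\; \beta .
\]
A set of links is simultaneously satisfiable if every link in the set meets this threshold while all of them transmit at once; the capacity is the size of the largest such set, and power control, interference cancellation, or a lower threshold $\beta$ change which sets are feasible.
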
Guest: Michael Dinitz
Host: Yvonne-Anne Pignolet

The Strong At-Most-Once Problem

Sotirios Kentros, Chadi Kari, Aggelos Kiayias

Abstract: The at-most-once problem in shared memory asks for the completion of a number of tasks by a set of independent processors while adhering to “at most once” semantics. At-most-once algorithms are evaluated in terms of effectiveness, a measure that expresses the total number of tasks completed at most once in the worst case. Motivated by the lack of deterministic solutions with high effectiveness, we study the feasibility of (a close variant of) this problem. The strong at-most-once problem is solved by an at-most-once algorithm if all tasks are performed whenever no participating process crashes during the execution of the algorithm. We prove that the strong at-most-once problem has consensus number 2. This explains, via impossibility, the lack of wait-free deterministic solutions with high effectiveness for the at-most-once problem using only read/write atomic registers. We then present the first k-adaptive, effectiveness-optimal randomized solution for the strong at-most-once problem, which has optimal expected work for a non-trivial number of participating processes. Our solution also provides the first k-adaptive randomized solution for the Write-All problem, a dual problem to at-most-once.

Guest: Sotiris Kentros

Host: Yvonne-Anne Pignolet

Randomized Distributed Decision

Pierre Fraigniaud, Amos Korman, Merav Parter, David Peleg

Abstract: The paper tackles the power of randomization in the context of locality by analyzing the ability to “boost” the success probability of deciding a distributed language. The main outcome of this analysis is that the distributed computing setting contrasts significantly with the sequential one as far as randomization is concerned. Indeed, we prove that in some cases, the ability to increase the success probability for deciding distributed languages is rather limited. Informally, a $(p,q)$-decider for a language $L$ is a distributed randomized algorithm which accepts instances in $L$ with probability at least $p$ and rejects instances outside of $L$ with probability at least $q$. It is known that every hereditary language that can be decided in $t$ rounds by a $(p,q)$-decider, where $p^2+q>1$, can actually be decided deterministically in $O(t)$ rounds. In one of our results we give evidence supporting the conjecture that the above statement holds for all distributed languages. This is achieved by considering the restricted case of path topologies. We then turn our attention to the range below the aforementioned threshold, namely, the case where $p^2+q\leq 1$. We define $B_k(t)$ to be the set of all languages decidable in at most $t$ rounds by a $(p,q)$-decider, where $p^{1+1/k}+q>1$. It is easy to see that every language is decidable (in zero rounds) by a $(p,q)$-decider satisfying $p+q=1$. Hence, the hierarchy $B_k$ provides a spectrum of complexity classes between determinism and complete randomization. We prove that all these classes are separated: for every integer $k\geq 1$, there exists a language $L$ satisfying $L\in B_{k+1}(0)$ but $L\notin B_k(t)$ for any $t=o(n)$. In addition, we show that $B_\infty(t)$ does not contain all languages, for any $t=o(n)$. Finally, we show that if the inputs can be restricted in certain ways, then the ability to boost the success probability becomes almost null.
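
Spelled out, and assuming the usual convention in distributed decision that acceptance requires every node to accept while rejection only requires some node to reject (a convention of the standard model, not quoted from the abstract), a $(p,q)$-decider $\mathcal{A}$ for $L$ on instances $(G,x)$ satisfies
\[
(G,x) \in L \;\Rightarrow\; \Pr[\text{every node accepts}] \ge p,
\qquad
(G,x) \notin L \;\Rightarrow\; \Pr[\text{at least one node rejects}] \ge q .
\]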

Guest: Merav Parter

Host: Yvonne-Anne Pignolet

 

The Impact of the Power Law Exponent on the Behavior of a Dynamic Epidemic Type Process

Adrian Ogierman and Robert Elsaesser

Abstract: Epidemic processes are widely used to design efficient distributed algorithms with applications in various research fields. In this paper, we consider a dynamic epidemic process in a certain (idealistic) urban environment modeled by a complete graph. The epidemic is spread among $n$ agents, which move from one node to another according to a power law distribution that describes the so-called attractiveness of the corresponding locations in the urban environment. If two agents meet at some node, an infection may be transmitted from one agent to the other.

We analyze two different scenarios. In the first case we assume that the attractiveness of the nodes follows a power law distribution with some exponent less than $3$, as observed in real-world examples. Then, we show that even if each agent may spread the disease for $f(n)$ time steps, where $f(n) = o(\log n)$ is a (slowly) growing function, at least a small (but still polynomial) number of agents remains uninfected and the epidemic is stopped after a logarithmic number of rounds. In the second scenario we assume that the power law exponent increases to some large constant, which can be seen as an implication of certain countermeasures against the spreading process.
Then, we show that if each agent is allowed to spread the disease for a constant number of time steps, the epidemic will only affect a polylogarithmic number of agents and the disease is stopped after $(\log \log n)^{O(1)}$ steps. Our results explain possible courses of a disease, and point out why cost-efficient countermeasures may reduce the number of total infections from a high percentage of the population to a negligible fraction.
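
The model lends itself to direct simulation. Below is a toy Python sketch of the dynamics described above (locations with power-law attractiveness, agents infectious for a bounded number of steps, transmission on co-location); all parameter values and the exact transmission rule are illustrative assumptions, not the paper's analysis.

import random
from collections import defaultdict

def simulate(n_agents=1000, n_nodes=1000, exponent=2.5, infectious_steps=3,
             rounds=50, seed=0):
    """Toy simulation of the mobility/epidemic model sketched above.
    Node attractiveness is drawn from a power law with the given exponent;
    agents visit nodes with probability proportional to attractiveness;
    co-located infectious agents transmit; each agent stays infectious for
    `infectious_steps` rounds.  Returns the number of agents ever infected."""
    rng = random.Random(seed)
    attractiveness = [rng.random() ** (-1.0 / (exponent - 1)) for _ in range(n_nodes)]
    infectious = [0] * n_agents           # remaining infectious rounds per agent
    infectious[0] = infectious_steps      # a single initially infected agent
    ever_infected = {0}
    for _ in range(rounds):
        # every agent independently picks a location according to attractiveness
        location = rng.choices(range(n_nodes), weights=attractiveness, k=n_agents)
        occupants = defaultdict(list)
        for agent, node in enumerate(location):
            occupants[node].append(agent)
        # any infectious agent infects the susceptible agents sharing its node
        for agents in occupants.values():
            if any(infectious[a] > 0 for a in agents):
                for a in agents:
                    if infectious[a] == 0 and a not in ever_infected:
                        infectious[a] = infectious_steps
                        ever_infected.add(a)
        infectious = [max(0, s - 1) for s in infectious]
    return len(ever_infected)

In this toy model, raising the exponent (the countermeasure scenario of the second part) makes the attractiveness values nearly uniform, so fewer agents pile up at the same hotspots and the returned infection count shrinks.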

Guest: Robert Elsaesser
Host: Zvi Lotker

Faster Randomized Consensus with an Oblivious Adversary

James Aspnes

Abstract: Two new algorithms are given for randomized consensus in a shared-memory model with an oblivious adversary. Each is based on a new construction of a conciliator, an object that guarantees termination and validity, but that only guarantees agreement with constant probability. The first conciliator assumes unit-cost snapshots and achieves agreement among n processes with probability 1-ε in O(log* n + log(1/ε)) steps for each process. The second uses ordinary multi-writer registers, and achieves agreement with probability 1-ε in O(log log n + log(1/ε)) steps. Combining these constructions with known results gives randomized consensus for arbitrarily many possible input values using unit-cost snapshots in O(log* n) expected steps and randomized consensus for up to O(log n log log n) possible input values using ordinary registers in O(log log n) expected steps.
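
To make the conciliator interface concrete (always terminating and valid, agreeing only with constant probability), here is a minimal Python sketch of a simple write-race construction; it is an illustration of the object's contract, not the unit-cost-snapshot or multi-writer-register constructions of the paper, and `register` is a hypothetical shared multi-writer register object with atomic read() and write() methods.

import random

def conciliator(my_value, register, n):
    """Sketch of a conciliator for n processes.  The shared register starts as
    None.  A process adopts any value it sees; otherwise it writes its own
    value with a probability that doubles each phase, so some value is written
    within O(log n) phases (termination), and only proposed values are ever
    written (validity).  Two processes that both win the write race may return
    different values, so agreement is only probabilistic; against an oblivious
    adversary the race has a unique winner with constant probability."""
    p = 1.0 / n
    while True:
        seen = register.read()
        if seen is not None:
            return seen
        if random.random() < p:
            register.write(my_value)
            return my_value
        p = min(1.0, 2.0 * p)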

Guest: James Aspnes
Host: Zvi Lotker

Leader Election in Shared Spectrum Radio Networks

Sebastian Daum, Seth Gilbert, Fabian Kuhn, and Calvin Newport

Abstract: In this paper, we study the {\em leader election problem} in the context of a congested single-hop radio network. We assume a collection of $N$ synchronous devices with access to a shared band of the radio spectrum, divided into $F$ frequencies. To model unpredictable congestion, we assume an abstract {\em interference adversary} that can choose up to $t < F$ frequencies in each round to disrupt, preventing communication. The devices are individually activated in arbitrary rounds by an adversary. On activation, a device does not know how many other devices (if any) are also active. The goal of the leader election problem is for each active device to output the id of a leader as soon as possible after activation, while preserving the safety constraint that all devices output the {\em same} leader, with high probability.

We begin by establishing a lower bound of $\Omega\big( \frac{\log^2 N}{(F-t)\log\log N} + \frac{F t}{F-t} \cdot \log N\big)$ rounds, through reduction to an existing result in this model. We then set out to prove this bound tight (within $\log\log N$ factors). For the case where $t=0$, we present a novel randomized algorithm, based on a strategy of recruiting {\em herald nodes}, that works in $O\big(\frac{\log^2 N}{F}+\log N\big)$ time. For $1 \leq t \leq F/6$, we present a variant of our herald algorithm in which multiple real (potentially disrupted) frequencies are used to simulate each non-disrupted frequency from the $t=0$ case. This algorithm works in $O\big(\frac{\log^2 N}{F} + t\log N\big)$ time. Finally, for $t > F/6$ we show how to improve the {\em trapdoor protocol} of [PODC 2009], used to solve a similar problem in a non-optimal manner, to solve leader election in optimal $O\big( \frac{\log N + F t}{F-t} \cdot \log N \big)$ time, for (only) these large values of $t$. We also observe that if $F=\omega(1)$ and $t \leq (1-\epsilon)F$ for a constant $\epsilon>0$, our protocols beat the classic $\Omega(\log^{2} N)$ bound on wake-up in a single-frequency radio network, underscoring the observation that more frequencies in a radio network allow for more algorithmic efficiency, even if devices can each only participate on a single frequency at a time, and a significant fraction of these frequencies are disrupted adversarially.

Guest: Sebastian Daum
Host: Yvonne-Anne Pignolet

Approximation Algorithms for Stochastic Orienteering

Anupam Gupta, Ravishankar Krishnaswamy, Viswanath Nagarajan, R. Ravi

Abstract: In the Stochastic Orienteering problem, we are given a metric, where each node also has a job located there with some deterministic reward and a random size. (Think of the jobs as being chores one needs to run, and the sizes as the amount of time it takes to do the chore.) The goal is to adaptively decide which nodes to visit to maximize total expected reward, subject to the constraint that the total distance traveled plus the total size of jobs processed is at most a given budget B. (I.e., we get reward for all those chores we finish by the end of the day.) The (random) size of a job is not known until it is completely processed. Hence the problem combines aspects of both the stochastic knapsack problem with uncertain item sizes and the deterministic orienteering problem of using a limited travel time to maximize gathered rewards located at nodes.

In this paper, we present a constant-factor approximation algorithm for the best non-adaptive policy for the Stochastic Orienteering problem. We also show a small adaptivity gap, i.e., that there exists a non-adaptive policy whose expected reward is within a small factor of the optimal adaptive expected reward, and hence we also get an approximation algorithm for the adaptive problem.

Finally, we address the case when the node rewards are also random and could be correlated with the waiting time, and give a non-adaptive policy with a provable approximation guarantee relative to the best adaptive policy on n-node metrics with budget B.
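
Purely to make the problem concrete, here is a simple non-adaptive heuristic in Python; it is not the paper's constant-factor algorithm and carries no approximation guarantee, and dist, reward, expected_size, and the budgeting rule are illustrative assumptions.

def greedy_nonadaptive_tour(start, nodes, dist, reward, expected_size, budget):
    """Illustrative non-adaptive heuristic: repeatedly move to the unvisited
    node with the best reward-to-expected-cost ratio, as long as the remaining
    expected budget allows it.  dist(u, v) is the metric, expected_size[v] the
    mean job size at v, and budget covers travel plus processing time."""
    tour, current, remaining = [], start, budget
    todo = set(nodes)
    while todo:
        # expected cost of serving v next: travel there plus expected job size
        cost = {v: dist(current, v) + expected_size[v] for v in todo}
        best = max(todo, key=lambda v: reward[v] / max(cost[v], 1e-9))
        if cost[best] > remaining:
            todo.remove(best)          # too expensive in expectation; skip it
            continue
        tour.append(best)
        remaining -= cost[best]
        current = best
        todo.remove(best)
    return tour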

Guest: Ravishankar Krishnaswamy
Host: Zvi Lotker

The Notion of a Rational Convex Program, and an Algorithm for the Arrow-Debreu Nash Bargaining Game

Vijay Vazirani

Abstract: We introduce the notion of a rational convex program (RCP) and we classify the known RCPs into two classes: quadratic and logarithmic. The importance of rationality is that it opens up the possibility of computing an optimal solution to the program via an algorithm that is either combinatorial or uses an LP-oracle. Next we define a new Nash bargaining game, called ADNB, which is derived from the linear case of the Arrow-Debreu market model. We show that the convex program for ADNB is a logarithmic RCP, but unlike other known members of this class, it is non-total.
Our main result is a combinatorial, polynomial time algorithm for ADNB. It turns out that the reason for infeasibility of logarithmic RCPs is quite different from that for LPs and quadratic RCPs. We believe that our ideas for surmounting the new difficulties will be useful for dealing with other non-total RCPs as well. We give an application of our combinatorial algorithm for ADNB to an important “fair” throughput allocation problem on a wireless channel. Finally, we present a number of interesting questions that the new notion of RCP raises.
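
As background, Nash bargaining over a compact convex set $\mathcal{N}$ of feasible utility vectors with disagreement point $c$ is captured by a logarithmic convex program of the standard form
\[
\max \; \sum_{i} \log\,(u_i - c_i) \quad \text{subject to} \quad u \in \mathcal{N}, \;\; u_i \ge c_i \;\; \text{for all } i ,
\]
whose optimum maximizes the product of the players' gains over their disagreement payoffs. Roughly speaking, the ADNB program instantiates $\mathcal{N}$ with the utilities achievable in a linear Arrow-Debreu market; calling such a program rational (an RCP) means that its optimal solution is a rational vector of polynomially bounded size, which is what makes a combinatorial or LP-oracle-based algorithm conceivable despite the logarithmic objective.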

 

Guest: Vijay Vazirani
Host: Zvi Lotker

On the Communication and Streaming Complexity of Maximum Bipartite Matching

Ashish Goel, Michael Kapralov and Sanjeev Khanna

Abstract: Consider the following communication problem. Alice holds a graph $G_A = (P, Q, E_A)$ and Bob holds a graph $G_B = (P, Q, E_B)$, where $|P|=|Q|=n$. Alice is allowed to send Bob a message m that depends only on the graph $G_A$. Bob must then output a matching $M \subseteq E_A \cup E_B$. What is the minimum size of the message m that Alice sends to Bob that allows Bob to recover a matching of size at least $\alpha$ times the maximum matching in $G_A \cup G_B$, for a given approximation ratio $\alpha$? The minimum message length is the one-round communication complexity of approximating bipartite matching. It is easy to see that the one-round communication complexity also gives a lower bound on the space needed by a one-pass streaming algorithm to compute an $\alpha$-approximate bipartite matching. The focus of this work is to understand the one-round communication complexity and one-pass streaming complexity of maximum bipartite matching. In particular, how well can one approximate these problems with linear communication and space? Prior to our work, only a 1/2-approximation was known for both these problems.
We show that for any $\delta > 0$, a $(2/3+\delta)$-approximation requires a communication complexity of $n^{1+\Omega(1/\log\log n)}$.
We also consider the natural restriction of the problem in which $G_A$ and $G_B$ are only allowed to share vertices on one side of the bipartition, which is motivated by applications to one-pass streaming with vertex arrivals. We show that a 3/4-approximation can be achieved with a linear-size message in this case, and this result is best possible in that super-linear space is needed to achieve any better approximation.
Finally, we build on our techniques for the restricted version above to design a one-pass streaming algorithm for the case when vertices on one side are known in advance, and the vertices on the other side arrive in a streaming manner together with all their incident edges. This is precisely the setting of the celebrated (1-1/e)-competitive randomized algorithm of Karp-Vazirani-Vazirani (KVV) for the online bipartite matching problem. We present here the first deterministic one-pass streaming (1-1/e)-approximation algorithm using O(n) space for this setting.
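
For reference, the 1/2-approximation baseline mentioned above is, in the streaming setting, the folklore greedy one-pass algorithm: keep an edge whenever both endpoints are still unmatched. The result is a maximal matching, hence at least half the size of a maximum matching, using O(n) space. A minimal Python sketch (the edge-stream format is an assumption for illustration):

def greedy_streaming_matching(edge_stream):
    """One pass over the edges: add an edge iff both endpoints are still free.
    The result is a maximal matching, hence at least half of a maximum one."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching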

Guest: Michael Kapralov
Host: Yvonne-Anne Pignolet