Convergence in (Social) Influence Networks

Silvio Frischknecht, Barbara Keller, and Roger Wattenhofer

Abstract:
We study the convergence of influence networks, where each node changes its state according to the majority of its neighbors. Our main result is a new bound on the convergence time in the synchronous model, solving the classic “Democrats and Republicans” problem. Furthermore, we give bounds for the sequential model, both when the sequence of steps is chosen by an adversary and when it is chosen by a benevolent process.
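
To make the dynamics concrete, here is a minimal Python sketch of the synchronous model from the abstract: all nodes update simultaneously to the majority opinion of their neighbors. The graph representation, the tie-breaking rule (a node keeps its state on a tie), and the stopping condition are illustrative assumptions of ours, not the paper's exact setup.

    # Synchronous majority dynamics: a hedged sketch, not the paper's construction.
    def synchronous_majority(G, state, max_rounds=1000):
        """G: dict node -> list of neighbors; state: dict node -> +1 or -1.
        Runs until a fixed point or a period-2 cycle is reached (by a classic
        result of Goles and Olivos, these are the only possible limits)."""
        prev = None
        for t in range(1, max_rounds + 1):
            new = {}
            for v in G:
                s = sum(state[u] for u in G[v])
                new[v] = state[v] if s == 0 else (1 if s > 0 else -1)
            if new == state or new == prev:   # converged (or started oscillating)
                return new, t
            prev, state = state, new
        return state, max_rounds

    # Example: alternating opinions on a 4-cycle flip every round (period 2).
    G = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(synchronous_majority(G, {0: 1, 1: -1, 2: 1, 3: -1}))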

Guest: Barbara Keller

Host: Shantanu Das

On the Precision of Social and Information Networks

Reza Bosagh Zadeh, Ashish Goel, Kamesh Munagala, Aneesh Sharma

Abstract: The diffusion of information on online social and information networks has been a popular topic of study in recent years, but attention has typically focused on speed of dissemination and recall (i.e., the fraction of users getting a piece of information). In this paper, we study the complementary notion of the precision of information diffusion. Our model of information dissemination is “broadcast-based”, i.e., one where every message (original or forwarded) from a user goes to a fixed set of recipients, often called the user’s “friends” or “followers”, as in Facebook and Twitter. The precision of the diffusion process is then defined as the fraction of received messages that a user finds interesting.
At first glance, it seems that broadcast-based information diffusion is a “blunt” targeting mechanism and must necessarily suffer from low precision. Somewhat surprisingly, we present preliminary experimental and analytical evidence to the contrary: it is possible to simultaneously have high precision (i.e., precision bounded below by a constant), high recall, and low diameter!
We start by presenting a set of conditions on the structure of user interests, and analytically show the necessity of each of these conditions for obtaining high precision. We also present preliminary experimental evidence from Twitter verifying that these conditions are satisfied. We then prove that the Kronecker-graph based generative model of Leskovec et al. satisfies these conditions given an appropriate and natural definition of user interests. Further, we show that this model also has high precision, high recall, and low diameter. We finally present preliminary experimental evidence showing Twitter has high precision, validating our conclusion. This is perhaps a first step towards a formal understanding of the immense popularity of online social networks as an information dissemination mechanism.
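
As a toy illustration of the precision and recall notions in the abstract, the following Python sketch diffuses one message through a follower graph and measures both quantities. The forwarding rule (users forward exactly the messages they find interesting) and the interest model are simplifying assumptions on our part, not the paper's definitions.

    from collections import deque

    def diffuse(followers, interests, source, topic):
        """followers[u]: users who receive u's broadcasts (u's audience).
        interests[u]: set of topics u finds interesting.
        Assumption: a user forwards a message iff it matches their interests."""
        received, interesting = {}, {}
        queue, forwarded = deque([source]), {source}
        while queue:
            u = queue.popleft()
            for v in followers.get(u, ()):
                received[v] = received.get(v, 0) + 1
                if topic in interests.get(v, ()):
                    interesting[v] = interesting.get(v, 0) + 1
                    if v not in forwarded:          # each user forwards at most once
                        forwarded.add(v)
                        queue.append(v)
        # precision: fraction of delivered messages that were interesting
        precision = sum(interesting.values()) / max(1, sum(received.values()))
        # recall: fraction of interested users that the message reached
        fans = sum(topic in i for i in interests.values())
        recall = len(interesting) / max(1, fans)
        return precision, recall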

Guest: Aneesh Sharma
Host: Yvonne-Anne Pignolet

Social Resilience in Online Communities: The Autopsy of Friendster

David Garcia, Pavlin Mavrodiev, Frank Schweitzer

Abstract: We empirically analyze five online communities: Friendster, Livejournal, Facebook, Orkut, and Myspace, to study how social networks decline. We define social resilience as the ability of a community to withstand changes. We do not argue about the cause of such changes, but concentrate on their impact. Changes may cause users to leave, which may trigger further departures by others who lost connection to their friends. This may lead to cascades of users leaving. A social network is said to be resilient if the size of such cascades can be limited. To quantify resilience, we use k-core analysis to identify subsets of the network in which all users have at least $k$ friends. These connections generate benefits (b) for each user, which have to outweigh the costs (c) of being a member of the network. If this difference is not positive, users leave. After all cascades, the remaining network is the k-core of the original network determined by the cost-to-benefit (c/b) ratio. By analysing the cumulative distribution of k-cores we are able to calculate the number of users remaining in each community. This allows us to infer the impact of the c/b ratio on the resilience of these online communities. We find that the different online communities have different k-core distributions. Consequently, similar changes in the c/b ratio have a different impact on the number of active users. Further, our resilience analysis shows that the topology of a social network alone cannot explain its success or failure. As a case study, we focus on the evolution of Friendster. We identify time periods when new users entering the network observed an insufficient c/b ratio. This measure can be seen as a precursor of the later collapse of the community. Our analysis can be applied to estimate the impact of changes in the user interface, which may temporarily increase the c/b ratio, thus posing a threat for the community to shrink, or even to collapse.
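
The leave-cascade described above can be reproduced in a few lines of Python: with benefit b per connection and membership cost c, every user who ends up with fewer than k = ceil(c/b) friends leaves, which amounts to repeated pruning down to the k-core. The threshold and variable names are our reading of the abstract, not the paper's code.

    import math

    def surviving_users(G, c, b):
        """G: dict user -> set of friends. Each friendship yields benefit b,
        membership costs c; users with degree*b < c leave, possibly in cascades.
        The survivors form the k-core of G for k = ceil(c / b) (assumed model)."""
        k = math.ceil(c / b)
        H = {u: set(vs) for u, vs in G.items()}
        while True:
            leavers = [u for u, vs in H.items() if len(vs) < k]
            if not leavers:
                return set(H)            # the k-core: nobody else wants to leave
            for u in leavers:            # remove leavers and their friendships
                for v in H.pop(u):
                    if v in H:
                        H[v].discard(u)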

Guest: David Garcia
Host: Yvonne-Anne Pignolet


Scalable Similarity Estimation in Social Networks: Closeness, Node Labels, and Random Edge Lengths

Edith Cohen, Daniel Delling, Fabian Fuchs, Moises Goldszmidt, Andrew V. Goldberg and Renato F. Werneck

Abstract:

Similarity estimation between nodes based on structural properties of graphs is a basic building block used in the analysis of massive networks for diverse purposes such as link prediction, product recommendations, advertisement, collaborative filtering, and community discovery. While local similarity measures, based on properties of immediate neighbors, are easy to compute, those relying on global properties have better recall. Unfortunately, this better quality comes with a computational price tag. Aiming for both accuracy and scalability, we make several contributions. First, we define closeness similarity, a natural measure that compares two nodes based on the similarity of their relations to all other nodes. Second, we show how the all-distances sketch (ADS) node labels, which are efficient to compute, can support the estimation of closeness similarity and shortest-path (SP) distances in logarithmic query time. Third, we propose the randomized edge lengths (REL) technique and define the corresponding REL distance, which captures both path length and path multiplicity and therefore improves over the SP distance as a similarity measure. The REL distance can also be the basis of closeness similarity and can be estimated using SP computation or the ADS labels. We demonstrate the effectiveness of our measures and the accuracy of our estimates through experiments on social networks with up to tens of millions of nodes.
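
As a rough illustration of the randomized-edge-lengths idea, the sketch below draws i.i.d. random lengths for all edges and averages shortest-path distances over several draws, so node pairs connected by many short paths come out closer than pairs joined by a single path. The exponential length distribution and the sample count are our assumptions; the paper's ADS-based estimators are not reproduced here.

    import random
    import networkx as nx

    def rel_distance(G, u, v, samples=32, seed=0):
        """Monte-Carlo estimate of a REL-style distance between u and v:
        the average shortest-path length under random i.i.d. edge lengths."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(samples):
            for e in G.edges():                       # fresh random lengths
                G.edges[e]["w"] = rng.expovariate(1.0)
            total += nx.shortest_path_length(G, u, v, weight="w")
        return total / samples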

Guest: Daniel Delling (Microsoft Research)

Host: Chen Avin

On the Performance of Percolation Graph Matching

Lyudmila Yartseva and Matthias Grossglauser

Abstract:

Graph matching is a generalization of the classic graph isomorphism problem. Using only their structures, a graph-matching algorithm finds a mapping between the vertex sets of two similar graphs. This has applications in the de-anonymization of social and information networks and, more generally, in the merging of structural data from different domains.

One class of graph-matching algorithms starts with a known seed set of matched node pairs. Despite the success of these algorithms in practical applications, their performance has been observed to be very sensitive to the size of the seed set. The lack of a rigorous understanding of parameters and performance makes it difficult to design systems and predict their behavior.

In this paper, we propose and analyze a very simple percolation-based graph matching algorithm that incrementally maps every pair of nodes (i,j) with at least r neighboring mapped pairs. The simplicity of this algorithm makes possible a rigorous analysis that relies on recent advances in bootstrap percolation theory for the G(n, p) random graph. We prove conditions on the model parameters in which percolation graph matching succeeds, and we establish a phase transition in the size of the seed set. We also confirm through experiments that the performance of percolation graph matching is surprisingly good, both for synthetic graphs and real social-network data.
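
A compact Python sketch of the percolation idea follows: every matched pair (u, u') deposits one mark on each candidate pair (i, j) with i adjacent to u and j adjacent to u', and a candidate that collects r marks is matched and spreads marks in turn. The tie-breaking and the handling of the one-to-one constraint are simplifications of ours.

    from collections import defaultdict, deque

    def percolation_match(G1, G2, seeds, r=2):
        """G1, G2: dicts node -> set of neighbors; seeds: list of matched pairs.
        Returns a mapping from G1 nodes to G2 nodes grown from the seed set."""
        marks = defaultdict(int)
        matched = dict(seeds)
        used = set(matched.values())
        queue = deque(matched.items())
        while queue:
            u, u2 = queue.popleft()                 # spread this pair's marks
            for i in G1[u]:
                if i in matched:
                    continue
                for j in G2[u2]:
                    if j in used:
                        continue
                    marks[(i, j)] += 1
                    if marks[(i, j)] >= r:          # pair percolates: match it
                        matched[i] = j
                        used.add(j)
                        queue.append((i, j))
                        break                       # i is matched; stop scanning
        return matched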

Guest: Lyudmila Yartseva

Host: Chen Avin

The Impact of the Power Law Exponent on the Behavior of a Dynamic Epidemic Type Process

Adrian Ogierman and Robert Elsaesser

Abstract: Epidemic processes are widely used to design efficient distributed algorithms with applications in various research fields. In this paper, we consider a dynamic epidemic process in a certain idealized urban environment modeled by a complete graph. The epidemic is spread among $n$ agents, which move from one node to another according to a power law distribution that describes the so-called attractiveness of the corresponding locations in the urban environment. If two agents meet at some node, then a possible infection may be transmitted from one agent to the other.

We analyze two different scenarios. In the first case we assume that the attractiveness of the nodes follows a power law distribution with some exponent less than $3$, as observed in real world examples. Then, we show that even if each agent may spread the disease for $f(n)$ time steps, where $f(n) = o(\log n)$ is a (slowly) growing function, at least a small (but still polynomial) number of agents remains uninfected and the epidemic is stopped after a logarithmic number of rounds. In the second scenario we assume that the power law exponent increases to some large constant, which can be seen as an implication of certain countermeasures against the spreading process. Then, we show that if each agent is allowed to spread the disease for a constant number of time steps, the epidemic will only affect a polylogarithmic number of agents and the disease is stopped after $(\log \log n)^{O(1)}$ steps. Our results explain possible courses of a disease, and point out why cost-efficient countermeasures may reduce the number of total infections from a high percentage of the population to a negligible fraction.
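
A toy simulation of the mobility model helps to see the role of the exponent: each round, every agent jumps to a location drawn from a power-law attractiveness distribution, and infectious agents infect whoever shares their location. The exponent, the deterministic transmission, and the SIR-style recovery are illustrative choices of ours, not the paper's exact process.

    import random
    from collections import defaultdict

    def simulate(n_agents=1000, n_locations=1000, beta=2.5,
                 infectious_steps=3, rounds=50, seed=0):
        """Returns the number of agents ever infected (agent 0 starts it)."""
        rng = random.Random(seed)
        # attractiveness of the location with rank i decays as a power law
        weights = [(i + 1) ** -beta for i in range(n_locations)]
        timer = {0: infectious_steps}       # infectious agent -> steps left
        recovered = set()
        for _ in range(rounds):
            locs = rng.choices(range(n_locations), weights=weights, k=n_agents)
            at = defaultdict(list)
            for agent, loc in enumerate(locs):
                at[loc].append(agent)
            new_cases = []
            for group in at.values():
                if any(a in timer for a in group):   # infectious agent present
                    new_cases += [a for a in group
                                  if a not in timer and a not in recovered]
            for a in list(timer):                    # infectious period ticks down
                timer[a] -= 1
                if timer[a] == 0:
                    del timer[a]
                    recovered.add(a)
            for a in new_cases:
                timer[a] = infectious_steps
        return len(timer) + len(recovered)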

Guest: Robert Elsaesser
Host: Zvi Lotker

Ultra-Fast Rumor Spreading in Social Networks

Nikolaos Fountoulakis, Konstantinos Panagiotou and Thomas Sauerwald

Abstract:
We analyze the popular push-pull protocol for spreading a rumor in networks. Initially, a single node knows of a rumor. In each succeeding round, every node chooses a random neighbor, and the two nodes share the rumor if one of them is already aware of it. We present the first theoretical analysis of this protocol on random graphs that have a power law degree distribution with an arbitrary exponent β ≥ 2.

Our main findings reveal a striking dichotomy in the performance of the protocol that depends on the exponent of the power law. More specifically, we show that if 2 < β < 3, then the rumor spreads to almost all nodes in Θ(log log n) rounds with high probability. On the other hand, if β > 3, then Ω(log n) rounds are necessary.

We also investigate the asynchronous version of the push-pull protocol, where the nodes do not operate in rounds, but exchange information according to a Poisson process with rate 1. Surprisingly, we are able to show that, if 2 < β < 3, the rumor spreads even in constant time, which is much smaller than the typical distance of two nodes. To the best of our knowledge, this is the first result that establishes a gap between the synchronous and the asynchronous protocol.
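
For the synchronous version, one round of push-pull is easy to state in code: every node calls a uniformly random neighbor, and the rumor crosses the link in whichever direction helps. The sketch below counts rounds until everyone is informed; it assumes a connected graph given as adjacency lists and is not tuned for the power-law graphs analyzed in the paper.

    import random

    def push_pull(G, start, max_rounds=10**6, seed=0):
        """G: dict node -> non-empty list of neighbors (connected graph assumed).
        Returns the number of rounds until all nodes know the rumor."""
        rng = random.Random(seed)
        informed = {start}
        for t in range(1, max_rounds + 1):
            newly = set()
            for v in G:
                u = rng.choice(G[v])                # each node calls one neighbor
                if v in informed and u not in informed:
                    newly.add(u)                    # push
                elif u in informed and v not in informed:
                    newly.add(v)                    # pull
            informed |= newly
            if len(informed) == len(G):
                return t
        return max_rounds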

Guest: Thomas Sauerwald
Host: Chen Avin