On the Precision of Social and Information Networks

Reza Bosagh Zadeh, Ashish Goel, Kamesh Munagala, Aneesh Sharma

Abstract: The diffusion of information on online social and information networks has been a popular topic of study in recent years, but attention has typically focused on speed of dissemination and recall (i.e. the fraction of users getting a piece of information). In this paper, we study the complementary notion of the precision of information diffusion. Our model of information dissemination is “broadcast-based”, i.e., one where every message (original or forwarded) from a user goes to a fixed set of recipients, often called the user’s “friends” or “followers”, as in Facebook and Twitter. The precision of the diffusion process is then defined as the fraction of received messages that a user finds interesting.
At first glance, it seems that broadcast-based information diffusion is a “blunt” targeting mechanism, and must necessarily suffer from low precision. Somewhat surprisingly, we present preliminary experimental and analytical evidence to the contrary: it is possible to simultaneously have high precision (i.e., bounded below by a constant), high recall, and low diameter!
We start by presenting a set of conditions on the structure of user interests, and analytically show the necessity of each of these conditions for obtaining high precision. We also present preliminary experimental evidence from Twitter verifying that these conditions are satisfied. We then prove that the Kronecker-graph based generative model of Leskovec et al. satisfies these conditions given an appropriate and natural definition of user interests. Further, we show that this model also has high precision, high recall, and low diameter. We finally present preliminary experimental evidence showing Twitter has high precision, validating our conclusion. This is perhaps a first step towards a formal understanding of the immense popularity of online social networks as an information dissemination mechanism.
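The two measures defined above can be made concrete with a minimal sketch: given the set of messages a user received and the set of messages (system-wide) the user finds interesting, precision and recall fall out directly. The message names below are hypothetical, not from the paper.

```python
def precision_recall(received, interesting):
    """received: set of messages delivered to a user.
    interesting: set of all messages the user would find interesting.
    Returns (precision, recall) of the diffusion for this user."""
    hits = received & interesting
    # precision: fraction of received messages that are interesting
    # recall: fraction of interesting messages that were received
    return len(hits) / len(received), len(hits) / len(interesting)
```

For example, a user who receives {m1, m2, m3, m4} while finding {m2, m3, m5} interesting has precision 1/2 and recall 2/3; the paper's claim is that both can stay high simultaneously in broadcast-based diffusion.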

Guest: Aneesh Sharma
Host: Yvonne-Anne Pignolet

Social Resilience in Online Communities: The Autopsy of Friendster

David Garcia, Pavlin Mavrodiev, Frank Schweitzer

Abstract: We empirically analyze five online communities: Friendster, Livejournal, Facebook, Orkut, and Myspace, to study how social networks decline.  We define social resilience as the ability of a community to withstand changes. We do not argue about the cause of such changes, but concentrate on their impact. Changes may cause users to leave, which may trigger further departures of others who lost connection to their friends. This may lead to cascades of users leaving. A social network is said to be resilient if the size of such cascades can be limited. To quantify resilience, we use k-core analysis to identify subsets of the network in which all users have at least $k$ friends.  These connections generate benefits (b) for each user, which have to outweigh the costs (c) of being a member of the network. If this difference is not positive, users leave.  After all cascades, the remaining network is the k-core of the original network determined by the cost-to-benefit (c/b) ratio.  By analyzing the cumulative distribution of k-cores we are able to calculate the number of users remaining in each community. This allows us to infer the impact of the c/b ratio on the resilience of these online communities.  We find that the different online communities have different k-core distributions. Consequently, similar changes in the c/b ratio have a different impact on the number of active users. Further, our resilience analysis shows that the topology of a social network alone cannot explain its success or failure. As a case study, we focus on the evolution of Friendster. We identify time periods when new users entering the network observed an insufficient c/b ratio. This measure can be seen as a precursor of the later collapse of the community. Our analysis can be applied to estimate the impact of changes in the user interface, which may temporarily increase the c/b ratio, thus posing a threat that the community may shrink, or even collapse.
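The leave-cascade in the abstract can be sketched in a few lines: a user stays only while the benefit of surviving friendships outweighs the membership cost (k·b − c > 0), and the fixed point of the cascade is a k-core. This is a minimal sketch on a toy adjacency dict, assuming the stay-threshold k = ⌊c/b⌋ + 1; the paper applies the idea to full network snapshots.

```python
import math

def surviving_users(adj, cb_ratio):
    """adj: dict mapping each user to the set of their friends.
    cb_ratio: cost-to-benefit ratio c/b.
    A user stays only while k_friends * b - c > 0, i.e. they need strictly
    more than c/b surviving friends; iterating removals until no user is
    below threshold yields the k-core with k = floor(c/b) + 1."""
    k = math.floor(cb_ratio) + 1
    alive = set(adj)
    changed = True
    while changed:
        changed = False
        for u in list(alive):
            # count only friendships to users who have not yet left
            if sum(1 for v in adj[u] if v in alive) < k:
                alive.discard(u)
                changed = True
    return alive
```

On a triangle with one pendant user, a c/b ratio of 1.5 makes the pendant user leave (one friend is not enough), while the triangle survives as the 2-core; at c/b = 0.5 everyone stays.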

Guest: David Garcia
Host: Yvonne-Anne Pignolet


Call Me MayBe: Understanding Nature and Risks of Sharing Mobile Numbers on Online Social Networks

Prachi Jain, Paridhi Jain and Ponnurangam Kumaraguru

Little research explores the activity of sharing mobile numbers on OSNs, in particular via public posts. In this work, we study the characteristics and risks of mobile numbers shared on OSNs, either via profiles or public posts, focusing on Indian mobile numbers. We collected 76,347 unique mobile numbers posted by 85,905 users on Twitter and Facebook and analyzed 2,997 numbers prefixed with +91. We observed that most users shared their own mobile numbers to spread urgent information and to market products, IT facilities, and escort businesses. Users resorted to applications like Twitterfeed and TweetDeck to post and popularize mobile numbers on multiple OSNs. To assess the risks associated with mobile numbers exposed on OSNs, we used the numbers to gain sensitive information (e.g. name, Voter ID) about their owners. We communicated the observed risks to the owners by calling them on their mobile numbers. Some users were surprised to learn of the online presence of their number, while others had intentionally put it online for business purposes. With these observations, we highlight the need to monitor leakage of mobile numbers via profiles and public posts. To the best of our knowledge, this is the first exploratory study to critically investigate the exposure of mobile numbers on OSNs.

Guest: Prof Ponnurangam Kumaraguru

Host: Prof Zvi Lotker, Ben-Gurion University.

Inferring User Interests from Tweet Times

Dinesh Ramasamy, Sriram Venkateswaran and Upamanyu Madhow


We propose and demonstrate the feasibility of a probabilistic framework for mining user interests from their tweet times alone, by exploiting the known timing of external events associated with these interests. This approach allows for making inferences on the interests of a large number of users for which text-based mining may become cumbersome, and also sidesteps the difficult problem of semantic/contextual analysis required for such text-based inferences. The statistic that we propose for gauging the user’s interest level is the probability that he/she tweets more frequently at certain times when this topic is in the “public eye” than at other times. We report on promising experimental results using Twitter data on detecting whether or not a user is a fan of a given baseball team, leveraging the known timing of games played by the team. Since people often interact with others who share similar interests, we extend our probabilistic framework to use the interest level estimates for other users with whom a person interacts (by referring to them in his/her tweets). We demonstrate that it is possible to significantly improve the detection probability (for a given false alarm rate) by such information pooling on the social graph.
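A drastically simplified version of the idea, not the paper's actual statistic, compares a user's tweet rate inside known event windows (e.g., the team's game times) with the rate outside them; all times and data below are hypothetical, in arbitrary units.

```python
def interest_score(tweet_times, event_windows, total_span):
    """tweet_times: list of tweet timestamps.
    event_windows: list of (start, end) intervals when the topic is in
    the public eye. total_span: total observation time.
    Returns the fraction of 'rate mass' inside event windows; values
    above 0.5 suggest the user tweets disproportionately during events."""
    in_event = sum(1 for t in tweet_times
                   if any(s <= t <= e for s, e in event_windows))
    event_time = sum(e - s for s, e in event_windows)
    rate_in = in_event / event_time
    rate_out = (len(tweet_times) - in_event) / (total_span - event_time)
    total = rate_in + rate_out
    return rate_in / total if total > 0 else 0.5
```

A user who tweets three out of four times during a single game window scores well above 0.5, while a user tweeting uniformly over the day scores near 0.5; the paper's framework replaces this heuristic with a proper probabilistic model and pools estimates across the social graph.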

Guest: Dinesh Ramasamy (UCSB)

Host:  Chen Avin

Scalable Similarity Estimation in Social Networks: Closeness, Node Labels, and Random Edge Lengths

Edith Cohen, Daniel Delling, Fabian Fuchs, Moises Goldszmidt, Andrew V. Goldberg and Renato F. Werneck


Similarity estimation between nodes based on structural properties of graphs is a basic building block used in the analysis of massive networks for diverse purposes such as link prediction, product recommendations, advertisement, collaborative filtering, and community discovery. While local similarity measures, based on properties of immediate neighbors, are easy to compute, those relying on global properties have better recall. Unfortunately, this better quality comes with a computational price tag. Aiming for both accuracy and scalability, we make several contributions. First, we define closeness similarity, a natural measure that compares two nodes based on the similarity of their relations to all other nodes. Second, we show how the all-distances sketch (ADS) node labels, which are efficient to compute, can support the estimation of closeness similarity and shortest-path (SP) distances in logarithmic query time. Third, we propose the randomized edge lengths (REL) technique and define the corresponding REL distance, which captures both path length and path multiplicity and therefore improves over the SP distance as a similarity measure. The REL distance can also be the basis of closeness similarity and can be estimated using SP computation or the ADS labels. We demonstrate the effectiveness of our measures and the accuracy of our estimates through experiments on social networks with up to tens of millions of nodes.
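The randomized-edge-lengths idea can be illustrated with a naive Monte Carlo sketch: draw i.i.d. random lengths for the edges (here exponential with mean 1, an assumed choice) and average the shortest-path distance over many draws. Node pairs connected by several short paths then come out closer than pairs connected by a single one. This is only a toy estimator, not the paper's ADS-based method.

```python
import heapq
import random

def dijkstra(adj_w, s, t):
    """Shortest-path length from s to t on a weighted adjacency dict
    mapping u -> list of (v, weight)."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if u in done:
            continue
        done.add(u)
        for v, w in adj_w.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def rel_distance(edges, s, t, samples=200, seed=0):
    """Estimate the REL distance between s and t by averaging shortest
    paths under i.i.d. exponential edge lengths (an assumed distribution)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        adj_w = {}
        for u, v in edges:
            w = rng.expovariate(1.0)  # fresh random length per draw
            adj_w.setdefault(u, []).append((v, w))
            adj_w.setdefault(v, []).append((u, w))
        total += dijkstra(adj_w, s, t)
    return total / samples
```

With unit lengths, a pair joined by two disjoint two-hop paths and a pair joined by one such path both sit at SP distance 2, but the REL distance is smaller in the first case, capturing path multiplicity as described above.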

Guest: Daniel Delling (Microsoft Research)

Host: Chen Avin

On the Performance of Percolation Graph Matching

Lyudmila Yartseva and Matthias Grossglauser


Graph matching is a generalization of the classic graph isomorphism problem. Using only their structures, a graph-matching algorithm finds a map between the vertex sets of two similar graphs. This has applications in the de-anonymization of social and information networks and, more generally, in the merging of structural data from different domains.

One class of graph-matching algorithms starts with a known seed set of matched node pairs. Despite the success of these algorithms in practical applications, their performance has been observed to be very sensitive to the size of the seed set. The lack of a rigorous understanding of parameters and performance makes it difficult to design systems and predict their behavior.

In this paper, we propose and analyze a very simple percolation-based graph matching algorithm that incrementally maps every pair of nodes (i,j) with at least r neighboring mapped pairs. The simplicity of this algorithm makes possible a rigorous analysis that relies on recent advances in bootstrap percolation theory for the G(n, p) random graph. We prove conditions on the model parameters in which percolation graph matching succeeds, and we establish a phase transition in the size of the seed set. We also confirm through experiments that the performance of percolation graph matching is surprisingly good, both for synthetic graphs and real social-network data.
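The percolation rule above is short enough to sketch directly: starting from the seed pairs, repeatedly match any unmatched pair (i, j) that has accumulated at least r already-matched neighboring pairs. This is a toy implementation on adjacency dicts; tie-breaking and candidate bookkeeping are simplified relative to the version analyzed in the paper.

```python
from collections import deque

def percolation_match(G1, G2, seeds, r):
    """G1, G2: dicts mapping each node to its set of neighbors.
    seeds: dict of known matched pairs (G1 node -> G2 node).
    Returns the final mapping after percolation with threshold r."""
    matched = dict(seeds)            # G1 node -> G2 node
    used = set(matched.values())     # G2 nodes already taken
    scores = {}                      # candidate pair -> matched-neighbor count
    queue = deque(matched.items())   # newly matched pairs to propagate from
    while queue:
        i, j = queue.popleft()
        # each neighbor pair of a matched pair gains one unit of evidence
        for ni in G1[i]:
            if ni in matched:
                continue
            for nj in G2[j]:
                if nj in used:
                    continue
                scores[(ni, nj)] = scores.get((ni, nj), 0) + 1
                if scores[(ni, nj)] >= r and ni not in matched:
                    matched[ni] = nj
                    used.add(nj)
                    queue.append((ni, nj))
                    break  # ni is now matched; stop considering other nj
    return matched
```

Matching a small graph against itself from two seed pairs with r = 2, the map percolates out to all nodes, illustrating the cascading growth whose phase transition the paper characterizes.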

Guest: Lyudmila Yartseva

Host: Chen Avin