The Cost of Radio Network Broadcast for Different Models of Unreliable Links

Mohsen Ghaffari, Nancy Lynch, Calvin Newport

Abstract: We study upper and lower bounds for the global and local broadcast problems in the dual graph model combined with adversaries of different strengths. The dual graph model is a generalization of the standard graph-based radio network model that includes unreliable links controlled by an adversary. It is motivated by the ubiquity of unreliable links in real wireless networks. Existing results in this model assume an offline adaptive adversary, the strongest type of adversary considered in standard randomized analysis. In this paper, we study the two other standard types of adversaries: online adaptive and oblivious. Our goal is to find a model that captures the unpredictable behavior of real networks while still allowing for efficient broadcast solutions.
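
To make the model concrete, below is a minimal Python sketch of how a single communication round in the dual graph model might be simulated. The function and variable names are hypothetical, the no-collision-detection rule follows the standard graph-based radio model, and the coin-flip activation of unreliable links plays the role of an oblivious adversary; none of this is code from the paper.

```python
import random

def dual_graph_round(nodes, reliable, unreliable, transmitters, messages, rng=random):
    """One round of the dual graph radio model (illustrative sketch).

    reliable / unreliable: disjoint sets of frozenset({u, v}) edges.
    transmitters: set of nodes broadcasting this round.
    messages: dict mapping each transmitter to its message.
    The adversary is modeled here as oblivious: each unreliable edge is
    activated by an independent coin flip (an assumption for illustration;
    the paper studies adversaries of varying strength).
    """
    # The adversary decides which unreliable links deliver this round.
    active = reliable | {e for e in unreliable if rng.random() < 0.5}

    received = {}
    for v in nodes:
        # Neighbors of v over links that are live this round.
        live_neighbors = {u for e in active if v in e for u in e if u != v}
        senders = live_neighbors & transmitters
        # Standard radio rule without collision detection: a message is
        # received only if exactly one live neighbor transmits.
        if len(senders) == 1:
            received[v] = messages[next(iter(senders))]
    return received
```

Repeatedly applying such a round function until every node holds the source's message is one simple way to experiment with how the adversary's choices affect broadcast time.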

Guest: Calvin Newport
Host: Yvonne-Anne Pignolet

Structuring Unreliable Radio Networks

Keren Censor-Hillel, Seth Gilbert, Fabian Kuhn, Nancy Lynch and Calvin Newport

Abstract: In this paper we study the problem of building a connected dominating set with constant degree (CCDS) in the dual graph radio network model [5, 10, 11]. This model includes two types of links: reliable, which always deliver messages, and unreliable, which sometimes fail to deliver messages. Real networks compensate for this differing quality by deploying low-layer detection protocols to filter unreliable from reliable links. With this in mind, we begin by presenting an algorithm that solves the CCDS problem in the dual graph model under the assumption that every process u is provided a local link detector set consisting of every neighbor connected to u by a reliable link. A natural follow-up question is whether the link detector must be perfectly reliable to solve the CCDS problem. Motivated by this question, we first describe an algorithm that builds a CCDS in O(∆ polylog(n)) time under the assumption that each link detector set includes only O(1) unreliable links. We then prove this algorithm to be (almost) tight by showing that the possible inclusion of only a single unreliable link in each process's local link detector set is sufficient to require Ω(∆) rounds to solve the CCDS problem, regardless of message size. We conclude by discussing how to apply our algorithm in the setting where the topology of reliable and unreliable links can change over time. A small sketch of the link detector assumption follows below.
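
As a rough illustration of the link detector assumption, the sketch below builds per-process detector sets containing all reliable neighbors plus up to b unreliable ones, mirroring the perfect (b = 0) and O(1)-error cases discussed in the abstract. All names are hypothetical, and the random choice of the extra links merely stands in for the adversarial choice used in the paper's lower bound.

```python
import random

def make_link_detectors(nodes, reliable, unreliable, b=0, rng=random):
    """Per-process link detector sets (illustrative sketch).

    A perfect detector (b = 0) contains exactly the process's reliable
    neighbors. An imperfect detector additionally contains up to b
    unreliable neighbors, matching the O(1)-error assumption above.
    Edges are frozenset({u, v}); the extra links are chosen at random
    here, whereas the lower bound lets an adversary choose them.
    """
    detectors = {}
    for v in nodes:
        rel = {u for e in reliable if v in e for u in e if u != v}
        unrel = {u for e in unreliable if v in e for u in e if u != v}
        extra = set(rng.sample(sorted(unrel), min(b, len(unrel))))
        detectors[v] = rel | extra
    return detectors
```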

Guest: Calvin Newport
Host: Yvonne-Anne Pignolet