Abstract: A fault-tolerant structure for a network is required to continue functioning following the failure of some of the network's edges or vertices. This paper considers breadth-first search (BFS) spanning trees and addresses the problem of designing a sparse fault-tolerant BFS tree (FT-BFS tree for short): a sparse subgraph T of the given network G such that, subsequent to the failure of a single edge or vertex, the surviving part T' of T still contains a BFS spanning tree for (the surviving part of) G. For a source node s, a target node t, and an edge e in G, the shortest s-t path P_{s,t,e} that does not go through e is known as a replacement path. Thus, our FT-BFS tree contains the collection of all replacement paths P_{s,t,e} for every t in V(G) and every failed edge e in E(G).
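As a concrete illustration of the replacement-path definition above, the sketch below computes P_{s,t,e} by running BFS while skipping the failed edge e. The helper name, toy graph, and edge encoding are our own for illustration, not from the paper:

```python
from collections import deque

def bfs_path(adj, s, t, forbidden=None):
    """Shortest s-t path by BFS, skipping the optional forbidden edge (u, v)."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:  # reconstruct path by walking parents back to s
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if forbidden and {u, v} == set(forbidden):
                continue  # the edge e has failed; route around it
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None  # t is unreachable after the failure

# Toy undirected graph: a 4-cycle 0-1-2-3-0.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(bfs_path(adj, 0, 2))                    # a shortest 0-2 path: [0, 1, 2]
print(bfs_path(adj, 0, 2, forbidden=(1, 2)))  # replacement path P_{0,2,(1,2)}: [0, 3, 2]
```

An FT-BFS tree must, in effect, contain such a replacement path for every target t and every possible failed edge e, which is what makes sparsity nontrivial.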
Our main results are as follows. We present an algorithm that, for every n-vertex graph G and source node s, constructs a (single-edge-failure) FT-BFS tree rooted at s with O(n · min(Depth(s), sqrt(n))) edges, where Depth(s) is the depth of the BFS tree rooted at s. This result is complemented by a matching lower bound, showing that there exist n-vertex graphs with a source node s for which any edge (or vertex) FT-BFS tree rooted at s has Ω(n^(3/2)) edges. We then consider fault-tolerant multi-source BFS trees (FT-MBFS trees for short), which aim to provide, following a failure, a BFS tree rooted at each source s in some subset S of sources. Again, tight bounds are provided: there exists a polynomial-time algorithm that, for every n-vertex graph and source set S of size σ, constructs a (single-failure) FT-MBFS tree T*(S) from each source s_i in S with O(sqrt(σ) · n^(3/2)) edges, and on the other hand there exist n-vertex graphs with source sets S of cardinality σ on which any FT-MBFS tree from S has Ω(sqrt(σ) · n^(3/2)) edges.
Finally, we propose an O(log n)-approximation algorithm for constructing FT-BFS and FT-MBFS structures. The latter is complemented by a hardness result stating that no o(log n)-approximation algorithm exists for these problems under standard complexity assumptions. In comparison with the randomized FT-BFS construction implicit in prior work, our algorithm is deterministic and may improve the number of edges by a factor of up to sqrt(n) on some instances. All our algorithms can be extended to handle a single vertex failure as well, with the same performance.
Guest: Merav Parter
Host: Stefan Schmid
Abstract: We study robust and efficient distributed algorithms for searching, storing, and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to guarantee, despite high node churn rate, that a large number of nodes in the network can store, retrieve, and maintain a large number of data items. Our main contributions are fast randomized distributed algorithms that guarantee the above with high probability even under high adversarial churn. In particular, we present the following main results:
1. A randomized distributed search algorithm that, with high probability, guarantees that searches from as many as n − o(n) nodes (where n is the stable network size) succeed in O(log n) rounds despite O(n / log^(1+δ) n) churn per round, for any small constant δ > 0. We assume that the churn is controlled by an oblivious adversary: one that has complete knowledge and control of which nodes join and leave and at what time, and has unlimited computational power, but is oblivious to the random choices made by the algorithm.
2. A storage and maintenance algorithm that guarantees, with high probability, that data items can be efficiently stored (with only Θ(log n) copies of each data item) and maintained in a dynamic P2P network with a churn rate of up to O(n / log^(1+δ) n) per round. Our search algorithm together with our storage and maintenance algorithm guarantees that as many as n − o(n) nodes can efficiently store, maintain, and search even under O(n / log^(1+δ) n) churn per round. Our algorithms require only a polylogarithmic (in n) number of bits to be processed and sent per round by each node. To the best of our knowledge, our algorithms are the first known fully distributed storage and search algorithms that provably work in highly dynamic settings (i.e., under high churn rates per step). Furthermore, they are localized (i.e., they do not require any global topological knowledge) and scalable. A technical contribution of this paper, which may be of independent interest, is showing how random walks can provably be used to derive scalable distributed algorithms in dynamic networks with adversarial node churn.
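As a loose illustration of the random-walk idea mentioned above, the sketch below samples a node by walking a fixed number of hops on a static snapshot of the overlay. On a well-connected topology, a sufficiently long walk ends at a near-uniformly random node, which is one way copies of a data item can be placed and later found. The graph, node names, and walk length here are hypothetical, and this sketch ignores churn, which the paper's algorithms explicitly handle:

```python
import random

def random_walk(adj, start, steps):
    """Walk `steps` hops from `start`, choosing a uniformly random neighbor
    at each hop; returns the endpoint of the walk."""
    u = start
    for _ in range(steps):
        u = random.choice(adj[u])
    return u

# Hypothetical static 8-node overlay: each node i links to i-1, i+1, and the
# antipodal node i+4 (mod 8), giving a symmetric, well-connected graph.
adj = {i: sorted({(i + 1) % 8, (i - 1) % 8, (i + 4) % 8}) for i in range(8)}

random.seed(1)
endpoints = [random_walk(adj, 0, 20) for _ in range(5)]
print(endpoints)  # five (near-uniform) sampled nodes
```

In a churning network, the technical difficulty is that the graph changes while the walk is in progress; the paper's contribution is showing that random-walk-based sampling can still be made to work, provably, under adversarial churn.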
Guests: Gopal Pandurangan and Peter Robinson
Host: Stefan Schmid