Abstract: Dynamic Bin Packing (DBP) is a variant of classical bin packing in which items may arrive and depart at arbitrary times. Existing work on DBP generally aims to minimize the maximum number of bins ever used in the packing. In this paper, we consider a new version of the DBP problem, namely the MinTotal DBP problem, which aims to minimize the total cost of the bins used over time. It is motivated by the request dispatching problem arising in cloud gaming systems. We analyze the competitive ratios of the commonly used First Fit, Best Fit, and Any Fit packing algorithms (the family of packing algorithms that open a new bin only when no currently open bin can accommodate the item to be packed) for the MinTotal DBP problem. We show that the competitive ratio of Any Fit packing cannot be better than μ, the ratio of the maximum to the minimum item interval length. The competitive ratio of Best Fit packing is not bounded for any given μ. For the general case, First Fit packing has a competitive ratio of 2μ + 13. We also propose a Modified First Fit packing algorithm that achieves a competitive ratio of 8⁄7 μ + 55⁄7 when μ is not known and a competitive ratio of μ + 8 when μ is known.
Guest: Yusen Li, Nanyang Technological University, Singapore
Host: Yvonne-Anne Pignolet
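The First Fit rule discussed in the episode is easy to state for the dynamic setting. Here is a minimal, illustrative sketch (not the paper's analysis or cost accounting; the event format and function names are our own): each arriving item goes into the first open bin with enough residual capacity, and a new bin is opened only if none fits.

```python
def first_fit_dynamic(events, capacity=1.0):
    """First Fit for dynamic bin packing (illustrative sketch).

    events: time-ordered list of (time, 'arrive'/'depart', item_id, size).
    Returns the final item->bin placement and the bins' contents.
    """
    bins = []        # each bin is a dict: item_id -> size
    placement = {}   # item_id -> bin index
    for _, kind, item, size in events:
        if kind == 'arrive':
            # scan bins in opening order; place in the first that fits
            for i, b in enumerate(bins):
                if sum(b.values()) + size <= capacity:
                    b[item] = size
                    placement[item] = i
                    break
            else:
                # no open bin fits: open a new one
                bins.append({item: size})
                placement[item] = len(bins) - 1
        else:
            # departure frees the item's space in its bin
            bins[placement.pop(item)].pop(item)
    return placement, bins
```

Note that in the MinTotal objective the cost would additionally be weighted by how long each bin stays in use, which this sketch does not track.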
Abstract: We consider the problem of scheduling a number of jobs on m identical processors sharing a continuously divisible resource. Each job j comes with a resource requirement r_j ∈ [0, 1]. The job can be processed at full speed if granted its full resource requirement. If it receives only an x-portion of r_j, it is processed at an x-fraction of the full speed. Our goal is to find a resource assignment that minimizes the makespan (i.e., the latest completion time). Variants of such problems, relating the resource assignment of jobs to their processing speeds, have been studied under the term discrete-continuous scheduling. Known results are either very pessimistic or heuristic in nature.
In this paper, we suggest and analyze a slightly simplified model. It focuses on the assignment of the shared continuous resource to the processors; the assignment of jobs to processors and the ordering of the jobs are already fixed. It is shown that, even for unit size jobs, finding an optimal solution is NP-hard if the number of processors is part of the input. Positive results for unit size jobs include an efficient optimal algorithm for 2 processors. Moreover, we prove that balanced schedules yield a (2 − 1/m)-approximation for a fixed number m of processors. Such schedules are computed by our GreedyBalance algorithm, for which the bound is tight.
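The speed model can be illustrated with a toy discrete-time simulation. The equal-split policy below is a naive stand-in for illustration only, not the paper's GreedyBalance, and all names are ours: a job with requirement r granted share s runs at speed min(1, s/r).

```python
def simulate_equal_split(queues, dt=0.001):
    """Toy simulation of the shared-resource speed model.

    queues: per-processor lists of (size, requirement) jobs in fixed order.
    At each time step, the unit resource is split equally among processors
    with pending work (naive policy, for illustration). Returns the makespan.
    """
    remaining = [list(q) for q in queues]
    t = 0.0
    while any(remaining):
        active = [i for i, q in enumerate(remaining) if q]
        share = 1.0 / len(active)
        for i in active:
            size, r = remaining[i][0]
            # a job granted an x-portion of r runs at an x-fraction of full speed
            speed = 1.0 if r == 0 else min(1.0, share / r)
            size -= speed * dt
            if size <= 1e-9:
                remaining[i].pop(0)      # job finished
            else:
                remaining[i][0] = (size, r)
        t += dt
    return t
```

For example, two unit jobs with requirement 0.5 on two processors both run at full speed under an even split, giving makespan 1; if one job instead needs the whole resource, it runs at half speed until its neighbor finishes.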
Guest: Sören Riechers, Universität Paderborn, https://www.hni.uni-paderborn.de/alg/mitarbeiter/soerenri/
Host: Yvonne-Anne Pignolet
The analysis of several algorithms and data structures can be framed as a peeling
process on a random hypergraph: vertices with degree less than k are removed
until there are no vertices of degree less than k left. The remaining hypergraph
is known as the k-core. In this paper, we analyze parallel peeling processes,
where in each round, all vertices of degree less than k are removed. It is
known that, below a specific edge density threshold, the k-core is empty
with high probability. We show that, with high probability, below this threshold,
only (1 / log((k-1)(r-1))) log log n + O(1) rounds of peeling are needed to obtain
the empty k-core for r-uniform hypergraphs. Interestingly, we show
that above this threshold, Ω(log n) rounds of peeling are required to find the non-empty
k-core. Since most algorithms and data structures aim to peel to an empty k-core,
this asymmetry appears fortunate. We verify the theoretical results both with simulation
and with a parallel implementation using graphics processing units (GPUs). Our
implementation provides insights into how to structure parallel peeling algorithms
for efficiency in practice.
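The round-based peeling process described above is straightforward to sketch (a hypothetical helper of our own; hyperedges are given as vertex collections, so it works for any r-uniform hypergraph):

```python
def parallel_peel(num_vertices, edges, k):
    """Peel a hypergraph in parallel rounds (illustrative sketch).

    In each round, every vertex of degree < k is removed simultaneously,
    together with all hyperedges touching a removed vertex.
    Returns (number of rounds, surviving vertex set == the k-core).
    """
    edges = [set(e) for e in edges]
    alive = set(range(num_vertices))
    rounds = 0
    while True:
        # degree of each surviving vertex over surviving hyperedges
        deg = {v: 0 for v in alive}
        for e in edges:
            for v in e:
                deg[v] += 1
        low = {v for v in alive if deg[v] < k}
        if not low:
            return rounds, alive   # no vertex below degree k: k-core found
        alive -= low
        edges = [e for e in edges if not (e & low)]
        rounds += 1
```

On a triangle with one pendant edge and k = 2, a single round peels the pendant vertex and leaves the triangle as the 2-core; on a path, repeated rounds peel everything away.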
With double hashing, for an item x, one generates two hash values f(x) and g(x),
and then uses combinations (f(x) + ig(x)) mod n for i = 0, 1, 2, … to generate
multiple hash values from the initial two. We show that the performance difference between double hashing and fully random hashing appears negligible in the standard balanced allocation paradigm, where each item is placed in the least loaded of
d choices, as well as several related variants. We perform an empirical study,
and consider multiple theoretical approaches. While several techniques can be used to
show asymptotic results for the maximum load, we demonstrate how fluid limit methods
explain why the behaviors of double hashing and fully random hashing are essentially
indistinguishable in this context.
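The scheme combines naturally with balanced allocation. A short sketch (helper names are ours; SHA-256 merely stands in for the two hash functions f and g, and g is forced odd so the d choices stay distinct when the table size is a power of two):

```python
import hashlib

def base_hashes(x, n):
    """Derive the two base hash values f(x), g(x) for an n-bin table."""
    digest = hashlib.sha256(str(x).encode()).digest()
    f = int.from_bytes(digest[:8], 'big') % n
    g = int.from_bytes(digest[8:16], 'big') % n
    return f, g | 1   # odd g is invertible mod a power-of-two n

def insert_double_hashed(loads, x, d):
    """Place x in the least loaded of d bins chosen by double hashing:
    choices are (f(x) + i*g(x)) mod n for i = 0, ..., d-1."""
    n = len(loads)
    f, g = base_hashes(x, n)
    choices = [(f + i * g) % n for i in range(d)]
    target = min(choices, key=lambda b: loads[b])
    loads[target] += 1
    return target
```

Only two hash evaluations are needed per item, yet, as the abstract notes, the resulting load behavior is empirically indistinguishable from using d independent fully random choices.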