Optimal Schedules for Parallelizing Anytime Algorithms
Lev Finkelstein, Shaul Markovitch and Ehud Rivlin. Optimal Schedules for Parallelizing Anytime Algorithms. In Proceedings of The AAAI Fall Symposium on Using Uncertainty within Computation, 49--56, 2001.
Online Version
A PDF version is available for download.
Abstract
The performance of nondeterministic anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from different initial states), different algorithms (if several alternatives exist), or several instances of the same algorithm (for nondeterministic algorithms). Straightforward parallelization, however, usually yields only a linear speedup, while more effective parallelization schemes require knowledge about the problem space and/or the algorithm itself. In this paper we present a general framework for parallelization that uses only minimal information about the algorithm (namely, its probabilistic behavior, described by a performance profile) and obtains a super-linear speedup through optimal scheduling of the different algorithm-problem pairs. We present a mathematical model for this framework, give algorithms for optimal scheduling, and demonstrate the behavior of optimal schedules for different kinds of anytime algorithms.
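To illustrate the intuition behind the super-linear speedup, the following minimal sketch (not the scheduling algorithm from the paper) shows that when the runtime of a nondeterministic algorithm is heavy-tailed, running several independent instances in parallel and taking the first one to finish can reduce the expected time by much more than the number of instances. The log-normal runtime model and all names and parameters below are assumptions made for this example.

# Illustrative sketch only (not the paper's scheduling algorithm): estimates the
# speedup from running several independent instances of a nondeterministic
# algorithm in parallel, under an assumed heavy-tailed (log-normal) runtime.
import random

random.seed(0)

def single_runtime():
    # Assumed heavy-tailed runtime of one nondeterministic run (log-normal).
    return random.lognormvariate(mu=0.0, sigma=2.0)

def expected_first_finish(num_instances, trials=20_000):
    # Monte Carlo estimate of the expected time until the first of
    # num_instances parallel, independent runs finishes.
    total = 0.0
    for _ in range(trials):
        total += min(single_runtime() for _ in range(num_instances))
    return total / trials

t1 = expected_first_finish(1)
for k in (2, 4, 8):
    tk = expected_first_finish(k)
    print(f"k={k}: speedup {t1 / tk:.2f} (linear speedup would be {k})")

With a heavy-tailed runtime distribution such as this one, the printed speedups exceed k; this is the kind of super-linear behavior the abstract refers to, which the paper obtains and optimizes via performance-profile-based scheduling.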
Keywords
Scheduling, Resource-Bounded Reasoning, Multi-Agent Systems
Co-authors
Bibtex Entry
@inproceedings{FinkelsteinMR01i,
title = {Optimal Schedules for Parallelizing Anytime Algorithms},
author = {Lev Finkelstein and Shaul Markovitch and Ehud Rivlin},
year = {2001},
booktitle = {Proceedings of The AAAI Fall Symposium on Using Uncertainty within Computation},
pages = {49--56},
keywords = {Scheduling, Resource-Bounded Reasoning, Multi-Agent Systems},
abstract = {The performance of nondeterministic anytime algorithms can be improved by simultaneously solving several instances of algorithm-problem pairs. These pairs may include different instances of a problem (such as starting from different initial states), different algorithms (if several alternatives exist), or several instances of the same algorithm (for nondeterministic algorithms). Straightforward parallelization, however, usually yields only a linear speedup, while more effective parallelization schemes require knowledge about the problem space and/or the algorithm itself. In this paper we present a general framework for parallelization that uses only minimal information about the algorithm (namely, its probabilistic behavior, described by a performance profile) and obtains a super-linear speedup through optimal scheduling of the different algorithm-problem pairs. We present a mathematical model for this framework, give algorithms for optimal scheduling, and demonstrate the behavior of optimal schedules for different kinds of anytime algorithms.}
}