Combining Discrete and Continuous Reasoning for Robot Motion Planning in Complex Domains
Luna, Ryan James
Kavraki, Lydia E
Doctor of Philosophy
Robots are being employed not only for assembly tasks but also in domains such as healthcare, mining, and the home. As robots become more capable, effective planning becomes critical, since optimization-based techniques often fail to perform the kinds of deliberation required for complex systems. This thesis proposes new techniques for solving two instances of motion planning: path planning for high-dimensional manipulators and optimal task planning under uncertainty. Although the two problems differ in their applications, the proposed solutions both build upon the notion of mixing discrete and continuous search to better inform the overall planning process.

Sampling-based algorithms are known for their ability to quickly compute paths for robotic manipulators. The same algorithms are also notorious for producing poor-quality solution paths, especially as the search space grows. This work demonstrates that, when planning for a robotic manipulator, high-quality paths can be computed in times competitive with the state of the art by guiding a series of points on the robot through a decomposition of the workspace. The proposed algorithm scales to high-dimensional manipulators with dozens of joints by focusing the search on subsets of joints that affect motion in promising portions of the workspace. Simulated experiments on Robonaut2 and on planar kinematic chains demonstrate the scalability and quality of the approach.

Planning under uncertainty requires computing a policy over the robot's state space, a computationally intensive task. Instead of solving a continuous Markov decision process (MDP) directly, this work proposes factoring the MDP over discrete regions of the space for tractability. Locally optimal policies are constructed within each region to navigate the system to each adjacent region, and the set of regions and local policies is combined to form a bounded-parameter MDP (BMDP) in which an optimal policy is computed by selecting one local policy for each region. Significant computational savings are realized by reasoning only over the discrete regions, allowing the framework to compute policies that satisfy complex task specifications for an uncertain system. The bounded-parameter MDP can also be patched at runtime to compute policies in uncertain environments. Simulations demonstrate that the proposed method computes an optimal policy orders of magnitude faster than existing techniques.
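To make the bounded-parameter MDP abstraction concrete, the following is a minimal illustrative sketch, not the thesis implementation: it selects one local policy per region by running pessimistic (lower-bound) value iteration over a toy abstraction with a discounted reach-the-goal objective. The region labels, probability intervals, and the helper pessimistic_value are all hypothetical, and the thesis's actual optimization criterion and data structures may differ.

```python
# Hypothetical sketch: pessimistic value iteration over a tiny BMDP abstraction.
# Regions 0 and 1 are intermediate; region 2 is an absorbing goal with value 1.
# Each candidate local policy for a region is a pair (lower, upper) of dicts
# giving interval bounds on the probability of reaching adjacent regions.
local_policies = {
    0: [({1: 0.7, 0: 0.2}, {1: 0.8, 0: 0.3}),   # local policy toward region 1
        ({2: 0.4, 0: 0.5}, {2: 0.5, 0: 0.6})],  # riskier local policy toward goal
    1: [({2: 0.8, 1: 0.1}, {2: 0.9, 1: 0.2})],
}
goal, gamma = 2, 0.95
V = {0: 0.0, 1: 0.0, goal: 1.0}

def pessimistic_value(lo, hi, V):
    """Worst-case expected successor value: assign the leftover probability
    mass to the lowest-valued successors first, respecting the interval bounds."""
    succs = sorted(lo, key=lambda s: V[s])       # worst successors first
    remaining = 1.0 - sum(lo.values())           # mass above the lower bounds
    probs = dict(lo)
    for s in succs:
        extra = min(hi[s] - lo[s], remaining)
        probs[s] += extra
        remaining -= extra
    return sum(p * V[s] for s, p in probs.items())

# Interval value iteration: each region's value is the best pessimistic value
# achievable by any of its candidate local policies.
for _ in range(200):
    for r, policies in local_policies.items():
        V[r] = max(gamma * pessimistic_value(lo, hi, V) for lo, hi in policies)

# The resulting policy over the abstraction picks one local policy per region.
best = {r: max(range(len(ps)), key=lambda i: pessimistic_value(*ps[i], V))
        for r, ps in local_policies.items()}
print(V, best)
```

Because the iteration runs only over the handful of discrete regions rather than the continuous state space, the policy-selection step stays small even when each local policy was expensive to construct, which is the source of the computational savings described above.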
motion planning; planning under uncertainty; sampling-based algorithms