Show simple item record

dc.contributor.advisor Cooper, Keith D.
dc.creator Sandoval, Jeffrey Andrew
dc.date.accessioned 2013-03-08T00:38:33Z
dc.date.available 2013-03-08T00:38:33Z
dc.date.issued 2011
dc.identifier.uri https://hdl.handle.net/1911/70428
dc.description.abstract Computational science demands extreme performance because the running time of an application often determines the size of the experiment that a scientist can reasonably compute. Unfortunately, traditional compiler technology is ill-equipped to harness the full potential of today's computing platforms, forcing scientists to spend time manually tuning their application's performance. Although improving compiler technology should alleviate this problem, two challenges obstruct this goal: hardware platforms are rapidly changing, and application software is difficult to statically model and predict. To address these problems, this thesis presents two techniques that aim to improve a compiler's adaptability: automatic resource characterization and selective, dynamic optimization. Resource characterization empirically measures a system's performance-critical characteristics, which can be provided to a parameterized compiler that specializes programs accordingly. Measuring these characteristics is important because a system's physical characteristics do not always match its observed characteristics. Consequently, resource characterization provides an empirical performance model of a system's actual behavior, which is better suited for guiding compiler optimizations than a purely theoretical model. This thesis presents techniques for determining a system's data-cache and TLB capacity, line size, and associativity, as well as instruction-cache capacity. Even with a perfect architectural model, compilers will still often generate suboptimal code because of the difficulty of statically analyzing and predicting a program's behavior. This thesis presents two techniques that enable selective, dynamic optimization for cases in which static compilation fails to deliver adequate performance. First, intermediate-representation (IR) annotation generates a fully optimized native binary tagged with a higher-level compiler representation of itself. The native binary benefits from static optimization and code generation, but the IR annotation allows targeted and aggressive dynamic optimization. Second, adaptive code selection allows a program to empirically tune its performance throughout execution by automatically identifying and favoring the best-performing variant of a routine. This technique can be used to dynamically choose between different static-compilation strategies, or it can be combined with IR annotation to perform dynamic, feedback-directed optimization.
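The adaptive code-selection idea described in the abstract can be illustrated with a minimal sketch: run each variant of a routine at least once, record observed running times, and thereafter favor the variant with the lowest average time. The class and injectable clock below are hypothetical illustrations of the general technique, not the thesis's actual implementation.

```python
class AdaptiveSelector:
    """Empirically favor the fastest variant of a routine during
    execution (illustrative sketch of adaptive code selection)."""

    def __init__(self, variants, timer):
        self.variants = list(variants)      # interchangeable callables
        self.timer = timer                  # clock function; injectable for testing
        self.totals = [0.0] * len(variants) # accumulated running time per variant
        self.counts = [0] * len(variants)   # invocation count per variant

    def call(self, *args):
        # Explore: run each variant at least once.
        if 0 in self.counts:
            i = self.counts.index(0)
        else:
            # Exploit: pick the variant with the lowest average observed time.
            i = min(range(len(self.variants)),
                    key=lambda k: self.totals[k] / self.counts[k])
        start = self.timer()
        result = self.variants[i](*args)
        self.totals[i] += self.timer() - start
        self.counts[i] += 1
        return result
```

In practice the variants might be a statically optimized routine and a dynamically recompiled one; here a fake clock makes the behavior deterministic for demonstration.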
dc.format.extent 64 p.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.subject Applied sciences
dc.subject High-performance computing
dc.subject Adaptable compilation
dc.subject Resource characterization
dc.subject Computer science
dc.title Foundations for Automatic, Adaptable Compilation
dc.identifier.digital SandovalJ
dc.type.genre Thesis
dc.type.material Text
thesis.degree.department Computer Science
thesis.degree.discipline Engineering
thesis.degree.grantor Rice University
thesis.degree.level Doctoral
thesis.degree.name Doctor of Philosophy
dc.identifier.citation Sandoval, Jeffrey Andrew. "Foundations for Automatic, Adaptable Compilation." (2011) Diss., Rice University. https://hdl.handle.net/1911/70428.

