Show simple item record

dc.contributor.author Ding, Chen
dc.date.accessioned 2017-08-02T22:02:46Z
dc.date.available 2017-08-02T22:02:46Z
dc.date.issued 2000-01-21
dc.identifier.uri https://hdl.handle.net/1911/96271
dc.description This work was also published as a Rice University thesis/dissertation: http://hdl.handle.net/1911/19488
dc.description.abstract While CPU speed has improved by a factor of 6400 over the past twenty years, memory bandwidth has increased by a factor of only 139 during the same period. Consequently, on modern machines the limited data supply simply cannot keep a CPU busy, and applications often utilize only a few percent of peak CPU performance. The hardware solution, which provides layers of high-bandwidth data cache, is not effective for large and complex applications, primarily for two reasons: far-separated data reuse and large-stride data access. The first causes unnecessary repeated transfers; the second transfers useless data. Both waste memory bandwidth. This dissertation pursues a software remedy. It investigates the potential for compiler optimizations to alter program behavior and reduce its memory bandwidth consumption. To this end, this research has studied a two-step transformation strategy: first fuse computations on the same data, then group data used by the same computation. Existing techniques such as loop blocking can be viewed as an application of this strategy within a single loop nest. To carry out this strategy to its full extent, this research has developed a set of compiler transformations that perform computation fusion and data grouping over the whole program and during the entire execution. The major new techniques and their unique contributions are: maximal loop fusion, an algorithm that achieves maximal fusion among all program statements and bounded reuse distance within a fused loop; inter-array data regrouping, the first technique to selectively group global data structures, and to do so with guaranteed profitability and compile-time optimality; and locality grouping and dynamic packing, the first set of compiler-inserted and compiler-optimized computation and data transformations performed at run time. These optimizations have been implemented in a research compiler and evaluated on real-world applications on an SGI Origin2000.
The results show that, on average, the new strategy eliminates 41% of memory loads in regular applications and 63% in irregular and dynamic programs. As a result, overall execution time is shortened by 12% to 77%. In addition to compiler optimizations, this research has developed a performance model and designed a performance tool. The former allows precise measurement of the memory bandwidth bottleneck; the latter enables effective user tuning and accurate performance prediction for large applications. Neither goal had been achieved before this thesis.
dc.format.extent 133 pp
dc.language.iso eng
dc.rights You are granted permission for the noncommercial reproduction, distribution, display, and performance of this technical report in any format, but this permission is only for a period of forty-five (45) days from the most recent time that you verified that this technical report is still available from the Computer Science Department of Rice University under terms that include this permission. All other rights are reserved by the author(s).
dc.title Improving Effective Bandwidth through Compiler Enhancement of Global and Dynamic Cache Reuse
dc.type Technical report
dc.date.note January 21, 2000
dc.identifier.digital TR00-352
dc.type.dcmi Text
dc.identifier.citation Ding, Chen. "Improving Effective Bandwidth through Compiler Enhancement of Global and Dynamic Cache Reuse." (2000) https://hdl.handle.net/1911/96271.

