    Prefetching and buffer management for parallel I/O systems

    Name:
    3021145.PDF
    Size:
    5.257 MB
    Format:
    PDF
    Author
    Kallahalla, Mahesh
    Date
    2000
    Advisor
    Varman, Peter J.
    Degree
    Doctor of Philosophy
    Abstract
    In parallel I/O systems the I/O buffer can be used to reduce I/O latency, by caching blocks to avoid repeated disk accesses for the same block, and to improve I/O parallelism, by holding prefetched blocks and making the load on the disks more uniform. To make the best use of the available parallelism and of the locality in I/O accesses, prefetching and caching algorithms must schedule reads intelligently, so that the most useful blocks are prefetched into the buffer and the most valuable blocks are retained when evictions become necessary. This dissertation focuses on algorithms for buffer management in parallel I/O systems. Our aim is to exploit the high parallelism provided by multiple disks to reduce the average read latency seen by an application. The thesis is that traditional greedy strategies fail to exploit I/O parallelism, necessitating new algorithms that make use of the available I/O resources. We show that buffer management in parallel I/O systems is fundamentally different from buffer management in single-disk systems, and we develop new algorithms that carefully decide which blocks to prefetch and when, together with which blocks to retain in the buffer. Our emphasis is on designing computationally simple algorithms that optimize the number of I/Os performed. We consider two classes of I/O access patterns, read-once and read-often, distinguished by the frequency of accesses to the same data. For buffer management under both classes of accesses, we identify fundamental bounds on the performance of online algorithms, study the performance of intuitive strategies, and present randomized and deterministic algorithms that guarantee higher performance.
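    The short Python sketch below illustrates the kind of scheduling gap the abstract describes. It is not taken from the dissertation: the model (at most one block fetched per disk in each parallel I/O step, a fixed-size buffer, read-once blocks consumed in order and evicted after use), the striped workload, and all function names are assumptions made here purely for illustration. It contrasts a strictly in-order greedy prefetcher with one that tries to keep every disk busy.

    # Toy model, not from the dissertation: a read-once reference string is
    # served from num_disks disks; each parallel I/O step may fetch at most
    # one block per disk into a buffer holding buffer_size blocks.

    def simulate(reference, placement, num_disks, buffer_size, policy):
        """Count the parallel I/O steps `policy` needs to serve `reference`,
        where placement[b] is the disk holding block b."""
        buffer_, pos, steps = set(), 0, 0
        while pos < len(reference):
            # one parallel I/O step: the policy picks at most one block per disk
            buffer_.update(policy(reference, placement, pos, buffer_,
                                  num_disks, buffer_size))
            steps += 1
            # consume the front of the reference string while it is buffered;
            # read-once blocks are evicted as soon as they are read
            while pos < len(reference) and reference[pos] in buffer_:
                buffer_.discard(reference[pos])
                pos += 1
        return steps

    def greedy_in_order(reference, placement, pos, buffer_, num_disks,
                        buffer_size):
        """Prefetch upcoming blocks strictly in reference order, stopping at
        the first disk conflict or when the buffer is full."""
        fetch, busy = set(), set()
        for block in reference[pos:]:
            if block in buffer_ or block in fetch:
                continue
            disk = placement[block]
            if disk in busy or len(buffer_) + len(fetch) >= buffer_size:
                break
            fetch.add(block)
            busy.add(disk)
        return fetch

    def one_per_disk(reference, placement, pos, buffer_, num_disks,
                     buffer_size):
        """Prefetch, for every disk, its earliest still-unread block, keeping
        as many disks busy in each step as the buffer allows."""
        fetch, busy = set(), set()
        for block in reference[pos:]:
            if block in buffer_ or block in fetch or placement[block] in busy:
                continue
            if len(buffer_) + len(fetch) >= buffer_size:
                break
            fetch.add(block)
            busy.add(placement[block])
            if len(busy) == num_disks:
                break
        return fetch

    if __name__ == "__main__":
        num_disks, buffer_size = 4, 8
        # consecutive blocks cluster on one disk, so in-order prefetching
        # leaves the other disks idle most of the time
        reference = list(range(32))
        placement = {b: (b // 8) % num_disks for b in reference}
        for name, policy in [("greedy in-order", greedy_in_order),
                             ("one block per disk", one_per_disk)]:
            print(name, simulate(reference, placement, num_disks,
                                 buffer_size, policy), "parallel I/O steps")

    On this invented workload the in-order policy serializes almost every fetch, while the disk-aware policy finishes in noticeably fewer parallel I/O steps; deciding how to close that kind of gap with provable guarantees, and how to balance it against buffer retention, is the subject the abstract summarizes.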
    Keyword
    Electronics; Electrical engineering; Computer science
    Citation
    Kallahalla, Mahesh. "Prefetching and buffer management for parallel I/O systems." (2000) Diss., Rice University. https://hdl.handle.net/1911/17987.
    Collections
    • Rice University Electronic Theses and Dissertations [13409]

    Managed by the Digital Scholarship Services at Fondren Library, Rice University
    Physical Address: 6100 Main Street, Houston, Texas 77005
    Mailing Address: MS-44, P.O. Box 1892, Houston, Texas 77251-1892