Show simple item record

dc.contributor.author Chen, Johnny
dc.contributor.author Druschel, Peter
dc.contributor.author Subramanian, Devika
dc.date.accessioned 2017-08-02T22:03:28Z
dc.date.available 2017-08-02T22:03:28Z
dc.date.issued 1997-02-17
dc.identifier.uri https://hdl.handle.net/1911/96455
dc.description.abstract We investigate two new distributed routing algorithms for data networks based on simple biological "ants" that explore the network and rapidly learn good routes, using a novel variation of reinforcement learning. These two algorithms are fully adaptive to topology changes and changes in link costs in the network, and have space and computational overheads that are competitive with traditional packet routing algorithms: although they can generate more routing traffic when the rate of failures in a network is low, they perform much better under higher failure rates. Both algorithms are more resilient than traditional algorithms, in the sense that random corruption of routing state has limited impact on the computation of paths. We present convergence theorems for both of our algorithms, drawing on the theory of non-stationary and stationary discrete-time Markov chains over the reals. We present an extensive empirical evaluation of our algorithms on a simulator that is widely used in the computer networks community for validating and testing protocols. We present comparative results on data delivery performance, aggregate routing traffic (algorithm overhead), and the degree of resilience for our new algorithms and two traditional routing algorithms in current use. We also show that the performance of our algorithms scales well with increasing network size, using a realistic topology.
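The abstract describes ants that explore the network and reinforce good routes. As a rough illustration only (the class name, update rule, and learning rate below are assumptions for this sketch, not taken from the report), an ant-style probabilistic routing table can be reinforced like this:

```python
import random

# Illustrative sketch of pheromone-style ant routing, NOT the report's algorithm:
# each node keeps, per destination, a probability distribution over neighbors;
# an arriving ant reinforces the neighbor it came through.

class AntRouter:
    def __init__(self, neighbors, destinations, lr=0.1):
        self.lr = lr  # reinforcement strength (assumed value)
        # table[dest][neighbor] = probability of forwarding toward dest via neighbor
        self.table = {
            d: {n: 1.0 / len(neighbors) for n in neighbors}
            for d in destinations
        }

    def reinforce(self, dest, neighbor):
        """An ant originating at `dest` that arrived via `neighbor`
        increases the probability of routing toward `dest` via `neighbor`."""
        probs = self.table[dest]
        probs[neighbor] += self.lr * (1.0 - probs[neighbor])
        for n in probs:
            if n != neighbor:
                probs[n] *= (1.0 - self.lr)

    def next_hop(self, dest):
        """Sample the next hop in proportion to the learned probabilities."""
        neighbors, weights = zip(*self.table[dest].items())
        return random.choices(neighbors, weights=weights)[0]

# Usage: ants from destination D repeatedly arrive via neighbor B,
# so the distribution shifts toward B.
router = AntRouter(neighbors=["B", "C"], destinations=["D"])
for _ in range(50):
    router.reinforce("D", "B")
```

Because random corruption perturbs only one probability entry, subsequent reinforcements wash it out, which is one intuition for the resilience property the abstract claims.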
dc.format.extent 13 pp
dc.language.iso eng
dc.rights You are granted permission for the noncommercial reproduction, distribution, display, and performance of this technical report in any format, but this permission is only for a period of forty-five (45) days from the most recent time that you verified that this technical report is still available from the Computer Science Department of Rice University under terms that include this permission. All other rights are reserved by the author(s).
dc.title Ants and Reinforcement Learning: A Case Study in Routing in Dynamic Networks
dc.type Technical report
dc.date.note February 17, 1997
dc.identifier.digital TR96-259
dc.type.dcmi Text
dc.identifier.citation Chen, Johnny, Druschel, Peter and Subramanian, Devika. "Ants and Reinforcement Learning: A Case Study in Routing in Dynamic Networks." (1997) https://hdl.handle.net/1911/96455.

