Comparing vector-based and ACT-R memory models using large-scale datasets: User-customized hashtag and tag prediction on Twitter and StackOverflow
Byrne, Michael D.
Doctor of Philosophy
The growth of social media and user-generated content on online sites provides unique opportunities to study models of declarative memory. The tasks of choosing a hashtag for a tweet and tagging a post on StackOverflow were framed as declarative memory retrieval problems. Two state-of-the-art, cognitively plausible declarative memory models were evaluated on how accurately they predict a user’s chosen tags: an ACT-R-based Bayesian model and a random permutation vector-based model. Millions of posts and tweets were collected, and both models were used to predict Twitter hashtags and StackOverflow tags. The results show that a user’s past tag use is a strong predictor of future use. Furthermore, past behavior was successfully incorporated into the random permutation model, which previously used only context. In addition, ACT-R’s attentional weight term was linked to a common entropy-weighting method from natural language processing that down-weights words with low predictive value. Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without encoding word order. This shows that the strength of the random permutation model lies not in its ability to represent word order but in how effectively it compresses context information. Finally, model accuracy was moderate to high on both tasks, supporting the theory that choosing tags on StackOverflow and Twitter is primarily a declarative memory retrieval process. The results of this large-scale exploration show how the architecture of the two memory models can be modified to significantly improve accuracy, and they suggest task-independent modifications that may improve model fit to human data across a much wider range of domains.
ACT-R declarative memory theory; vector-based models; LSA; machine learning
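The random permutation model mentioned in the abstract builds distributed context representations from sparse random index vectors, optionally applying a fixed permutation to encode word position. The following is a minimal sketch of that general technique (random indexing with permutation-encoded order), not the dissertation's implementation; the dimensionality, sparsity, and tag-retrieval step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 1000  # illustrative dimensionality, not the dissertation's setting


def index_vector():
    # Sparse ternary random index vector: a few +/-1 entries, rest zero.
    v = np.zeros(DIM)
    pos = rng.choice(DIM, size=10, replace=False)
    v[pos] = rng.choice([-1.0, 1.0], size=10)
    return v


vocab = {}


def iv(word):
    # Each word gets one fixed random index vector.
    if word not in vocab:
        vocab[word] = index_vector()
    return vocab[word]


PERM = rng.permutation(DIM)  # fixed permutation used to encode position


def context_vector(words, use_order=False):
    # Sum the (optionally permuted) index vectors of the context words.
    # With use_order=True, permuting k times marks the k-th position,
    # so "a b" and "b a" yield different context vectors.
    ctx = np.zeros(DIM)
    for k, w in enumerate(words):
        v = iv(w)
        if use_order:
            for _ in range(k):
                v = v[PERM]
        ctx += v
    return ctx


def best_tag(context, tag_vectors):
    # Retrieve the tag whose stored vector is most similar (cosine) to
    # the current context vector.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    return max(tag_vectors, key=lambda t: cos(context, tag_vectors[t]))
```

Because high-dimensional sparse random vectors are nearly orthogonal, a tag vector accumulated from a user's past contexts overlaps strongly only with similar new contexts; dropping `use_order` reduces the model to a plain bag-of-words sum, which is the comparison the abstract describes.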