Macquarie University

Words paint a thousand pictures: evaluating topic models using FOMC transcripts

Download (5.42 MB)
thesis
posted on 2023-01-25, 02:42 authored by Luke Cayanan

At its core, a topic model’s primary task is to uncover patterns in a huge collection of unstructured data automatically and quickly. This architecture is naturally well suited to analysing large corpora (collections of text data). When applied to text, a topic model typically returns groups of semantically linked words; these word groups are called topics. As such, topic models are appealing to researchers in fields beyond machine learning and natural language processing (NLP), such as higher education, sociology, and finance and economics, which frequently deal with these types of data. The focus of this study is on finance and economics data. Recent advancements in topic modelling have sought to improve the quality, or interpretability, of topics. In particular, I focus on newer topic models that incorporate word embeddings and user guidance, enabling the models to learn better topics. However, the application of these newer approaches has rarely transitioned outside the machine learning and NLP research fields; non-NLP domains have largely been confined to the classic Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) methods. I extend the work of Hansen et al. (2018), who use traditional LDA to investigate the topical structure underlying the US Federal Open Market Committee (FOMC) transcripts. They show econometrically that these transcripts contain signals about the behaviour of inexperienced FOMC members (rookies). I take this result and frame it as a machine learning problem, tasking a classifier with predicting whether an FOMC member is a rookie, given the text of the transcripts. I then assess the efficacy of the newer topic models against the benchmark LDA model of Hansen et al. (2018) in making these predictions. I also compare the topic quality of the newer topic models against traditional LDA, as measured by standard NLP metrics.
I find that while the newer topic models improve topic quality relative to LDA, they are unable to outperform LDA in the classification task.
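The pipeline the abstract describes, fitting a topic model to a corpus and then using each document's topic proportions as classifier features, can be sketched roughly as below. This is a minimal illustration with a tiny synthetic corpus and made-up rookie/veteran labels, not the thesis's actual FOMC data or its specific model configurations:

```python
# Hedged sketch: LDA topic proportions as features for a rookie classifier.
# The documents and labels are synthetic placeholders, for illustration only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LogisticRegression

docs = [
    "inflation rates policy inflation target",
    "inflation policy rates target policy",
    "uncertain perhaps maybe think guess",
    "maybe uncertain think perhaps wonder",
]
labels = [0, 0, 1, 1]  # 0 = veteran, 1 = rookie (synthetic labels)

# Bag-of-words counts -> per-document topic proportions
counts = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)  # shape (n_docs, n_topics); rows sum to 1

# Topic proportions become the feature matrix for the classifier
clf = LogisticRegression().fit(theta, labels)
preds = clf.predict(theta)
```

The newer models the thesis evaluates (ETM, CatE) would slot into the same pipeline by replacing the LDA step with their own document-topic representations.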

History

Table of Contents

1 Introduction -- 2 Background and Literature Review -- 3 Replicating Hansen et al. 2018 -- 4 A classification task -- 5 Competing models: ETM and CatE -- 6 Classification and Topic Quality Results -- 7 Conclusion and Future Research -- Appendix -- Bibliography

Awarding Institution

Macquarie University

Degree Type

Thesis MRes

Department, Centre or School

Department of Computing

Year of Award

2022

Principal Supervisor

Mark Dras

Rights

Copyright: The Author. Copyright disclaimer: https://www.mq.edu.au/copyright-disclaimer

Language

English

Extent

112 pages
