Posted on 2022-03-29, 00:40, authored by Fiona Martin
This thesis examined whether simple preprocessing of documents, such as lemmatising text or removing or weighting certain parts of speech, could generate better quality topics, faster, using Latent Dirichlet Allocation (LDA) topic modelling. Past work has generally attempted to improve topic modelling performance by changing the topic modelling algorithm itself; this study examines the simpler option of transforming the documents given as input to the LDA algorithm. Topic quality was assessed on a range of measures covering both topic interpretability and how well the topics represented the source documents. The results indicate that when the number of topics to be generated was large (200 or 500 topics), reducing the input documents to only nouns, or to nouns and adjectives, both improved topic quality and shortened the time needed to generate the topics. The study also found that even when the number of topics was not large, input documents could be reduced to selected parts of speech to speed topic generation with no loss of topic quality. These results imply that very large data sets may benefit from being lemmatised and reduced to just their nouns prior to topic modelling.
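The preprocessing step described above can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: the `LEXICON` lookup is a hand-made stand-in for a real tagger and lemmatiser (in practice an NLP library such as spaCy or NLTK would supply the tags), and `reduce_to_pos` is a hypothetical helper name. The reduced documents would then be handed to an LDA implementation in place of the full text.

```python
# Sketch of the part-of-speech reduction: lemmatise each token and keep
# only the requested parts of speech before topic modelling.

# Hypothetical per-token (lemma, POS) lookup standing in for a tagger;
# a real pipeline would use a trained tagger and lemmatiser instead.
LEXICON = {
    "topics":    ("topic", "NOUN"),
    "corpora":   ("corpus", "NOUN"),
    "are":       ("be", "VERB"),
    "generated": ("generate", "VERB"),
    "quickly":   ("quickly", "ADV"),
    "large":     ("large", "ADJ"),
}

def reduce_to_pos(doc, keep={"NOUN"}):
    """Lemmatise tokens and keep only the requested parts of speech."""
    reduced = []
    for token in doc.lower().split():
        lemma, pos = LEXICON.get(token, (token, "X"))  # unknown words get tag "X"
        if pos in keep:
            reduced.append(lemma)
    return reduced

doc = "Topics are generated quickly from large corpora"
print(reduce_to_pos(doc))                        # nouns only
print(reduce_to_pos(doc, keep={"NOUN", "ADJ"}))  # nouns and adjectives
```

Filtering to nouns shrinks the vocabulary and document length, which is why the thesis finds both a speed-up and, at high topic counts, a quality gain: the discarded function words and verbs contribute little to topic interpretability.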