By Sherry He

Bello!👋

In this post, I'm going to talk about the LDA (Latent Dirichlet Allocation) topic model and its results on movie trailer comments.

But please play around with the interactive results first:

How does LDA work?

Latent Dirichlet Allocation is a Bayesian topic model introduced by Blei et al., 2003. It produces better results than previous topic models such as LSI and pLSI, and is now a building block of many applications, such as short-text information extraction and review-based recommendation systems. Its basic idea is:

  • A corpus is a set of documents (all the comments/reviews under a movie), and it exhibits multiple topics
  • Each document is a mixture of topics
  • Each topic is a (multinomial) distribution over words
  • Each word is drawn from one of the topics (see the toy sketch below)
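To make that generative story concrete, here is a toy sketch in Python. The vocabulary, the two topics and the prior are all made up for illustration, not taken from our data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical vocabulary and topics, made up for illustration
vocab = ["hero", "villain", "battle", "love", "kiss", "tears"]
topics = np.array([
    [0.40, 0.30, 0.25, 0.02, 0.02, 0.01],  # an "action" topic
    [0.02, 0.02, 0.01, 0.40, 0.30, 0.25],  # a "romance" topic
])
alpha = [0.5, 0.5]  # Dirichlet prior over per-document topic proportions

def generate_document(n_words=10):
    theta = rng.dirichlet(alpha)  # this document's own mixture of topics
    words = []
    for _ in range(n_words):
        z = rng.choice(len(topics), p=theta)          # pick a topic for this word...
        words.append(rng.choice(vocab, p=topics[z]))  # ...then a word from that topic
    return theta, words

theta, doc = generate_document()
print("topic proportions:", theta.round(2))
print("document:", " ".join(doc))
```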
Another way of thinking about it: if you are a fan of highlighters and you use a different colour for the words in each topic, you will get a document like this:

(image: a document with each word highlighted in its topic's colour)

If you (ask a machine to) highlight every word in a corpus this way, discarding meaningless words, you will get two distributions:

  • Proportion of each topic in the corpus
  • Word distribution within each topic

(image: topic proportions across the corpus, and the word distribution within each topic)

And that's how the model "generates" probabilistic topic and word distributions. In reality, though, the only observable data are the documents, and you have to infer the topic and word distributions from what you see in them. So the next step is to use Bayesian inference to estimate those distributions. There are several parametric and non-parametric Bayesian inference algorithms for this, such as variational inference and Markov chain Monte Carlo simulation.
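For the curious, here is a minimal collapsed Gibbs sampler, one flavour of the MCMC approach just mentioned. It's a sketch for intuition only; Gensim's implementation (used below) relies on online variational Bayes instead:

```python
import numpy as np

def gibbs_lda(docs, n_topics, vocab_size, alpha=0.1, beta=0.01, n_iters=200, seed=0):
    """Collapsed Gibbs sampling for LDA.
    docs: list of documents, each a list of integer word ids."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # document-topic counts
    nkw = np.zeros((n_topics, vocab_size))  # topic-word counts
    nk = np.zeros(n_topics)                 # total words assigned to each topic
    # Start from a random topic assignment for every word token
    z = [[int(rng.integers(n_topics)) for _ in doc] for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove this token's current assignment, then resample its topic
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + vocab_size * beta)
                k = int(rng.choice(n_topics, p=p / p.sum()))
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    # Normalising (counts + priors) row-wise yields the two distributions above
    return ndk, nkw

# Toy usage over a vocabulary of 4 word ids
docs = [[0, 1, 0, 2], [2, 3, 3, 1], [0, 0, 1, 2]]
ndk, nkw = gibbs_lda(docs, n_topics=2, vocab_size=4)
```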

Instead of going deeper into maths, let's move on to our empirical study!

LDA on Comments of Movie Trailers

Because people have drastically different preferences in entertainment, we think the movie production business is an interesting setting for behavioural finance. We selected eight top movie studios in the entertainment industry: 20th Century Fox, Disney, Lionsgate, Marvel, Paramount Pictures, Sony Pictures, Universal Studios and Warner Bros. Using the YouTube Data API (see our previous post), we extracted all comments under 384 movie trailers.
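For reference, the extraction step looked roughly like the sketch below, assuming the official google-api-python-client and the YouTube Data API v3; API_KEY and VIDEO_ID are placeholders, and quota/error handling is omitted (see our previous post for the full story):

```python
from googleapiclient.discovery import build

API_KEY = "YOUR_API_KEY"      # placeholder -- use your own key
VIDEO_ID = "SOME_TRAILER_ID"  # placeholder video id

youtube = build("youtube", "v3", developerKey=API_KEY)

def fetch_comments(video_id):
    """Collect the text of all top-level comments of one video, page by page."""
    comments = []
    kwargs = dict(part="snippet", videoId=video_id, maxResults=100)
    while True:
        response = youtube.commentThreads().list(**kwargs).execute()
        for item in response["items"]:
            snippet = item["snippet"]["topLevelComment"]["snippet"]
            comments.append(snippet["textDisplay"])
        token = response.get("nextPageToken")
        if not token:
            return comments
        kwargs["pageToken"] = token  # fetch the next page of comments

comments = fetch_comments(VIDEO_ID)
```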

After preprocessing, we used the Python package Gensim to perform LDA topic modelling. Check out our interactive results above!
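The preprocessing itself was along these lines: tokenise and lowercase each comment, then drop stopwords and very short tokens. A minimal sketch using Gensim's helpers (the example comments are made up, and our actual pipeline may have differed slightly):

```python
from gensim.utils import simple_preprocess
from gensim.parsing.preprocessing import STOPWORDS

def preprocess(raw_comments):
    """Tokenise/lowercase each comment, dropping stopwords and very short tokens."""
    return [
        [t for t in simple_preprocess(comment, deacc=True)
         if t not in STOPWORDS and len(t) > 2]
        for comment in raw_comments
    ]

# Turns raw strings into the token lists Gensim expects
all_studio_comment_list = preprocess(["Stop laughing and begin!",
                                      "John Cena is in the description"])
```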

We can see that topic 2 is about Star Wars (a Disney franchise), topic 6 is about Marvel heroes/Black Panther, topic 3 is about love/drama films, and topic 5 is possibly dominated by dinosaurs?

The x and y axes are a two-dimensional projection of the topics, so the distance between topic bubbles represents how closely they're related, and the size of each bubble shows how prevalent the topic is in the corpus. From the example we can see one advantage of LDA: its great interpretability, which is very useful in data communication.

Another advantage (and probably the reason why LDA performs so well) is that, unlike previous topic models, which only consider the topic distribution within individual documents, LDA adds another hierarchy: a corpus-level topic distribution. LDA can thus take the relationships between documents into account. For example, if we consider the collected works of Shakespeare as a corpus and each of his plays as a document, then it makes sense that his works share some common topics (tragic love, the life struggles of princes, etc.).

Despite LDA's merits, we did not incorporate it into the further steps of our study, which relate movie viewers' sentiment to the stock performance of film studios. This is mainly because we aim at a supervised machine learning task but have no ground truth/labelled data, and we cannot manually label every comment we extracted (over 200k in total 😲). We therefore resorted to TextBlob, which offers a pre-trained sentiment model based on the widely accepted WordNet database.
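For completeness, scoring a comment with TextBlob is essentially a one-liner (the example comment is made up):

```python
from textblob import TextBlob

comment = "This trailer looks absolutely amazing, can't wait!"
sentiment = TextBlob(comment).sentiment
print(sentiment.polarity)      # -1.0 (negative) .. 1.0 (positive)
print(sentiment.subjectivity)  # 0.0 (objective) .. 1.0 (subjective)
```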

Nevertheless, I'm happy that we learnt about LDA and did some interesting textual analysis.🦄

Below is a minimal version of the Python code to realise the LDA interactive results using Gensim and pyLDAvis (assuming text pre-processing is done).

Note that the corpus is supposed to be much larger than this; the all_studio_comment_list variable below is just an illustration.

```python
from pprint import pprint

# Gensim
import gensim
import gensim.corpora as corpora
from gensim.models import CoherenceModel

# Plotting tools
import pyLDAvis
import pyLDAvis.gensim  # don't skip this

# just an example of corpus
all_studio_comment_list = [[ 'stop','laugh', 'begin'], 
                ['disappoint','garbage','actors','especially','mark','funny','wonder','end','reject','netflix','pile'],
                ['john','cena','description']]

# Create Dictionary
id2word = corpora.Dictionary(all_studio_comment_list)

# Create Corpus
texts = all_studio_comment_list

# Bag-of-words representation: each document becomes (token_id, frequency) pairs
corpus = [id2word.doc2bow(text) for text in texts]

# Build LDA model (pass the bag-of-words corpus, not the raw token lists)
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
                                           id2word=id2word,
                                           num_topics=20,    # number of topics
                                           random_state=100,
                                           update_every=1,
                                           chunksize=100,
                                           passes=10,
                                           alpha='auto',     # learn an asymmetric Dirichlet prior from the data (instead of symmetric 1.0/num_topics)
                                           per_word_topics=True)

# Print the top 40 keywords for 10 of the topics
pprint(lda_model.print_topics(num_topics=10, num_words=40))

# Visualize the topics
pyLDAvis.enable_notebook()  # needed inside a Jupyter notebook
vis = pyLDAvis.gensim.prepare(lda_model, corpus, id2word) # this might take a while
pyLDAvis.show(vis) # outside a notebook this serves the interactive html, and done!

```
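Finally, CoherenceModel is imported above but never called; here is how you might use it to sanity-check the choice of num_topics (a sketch using Gensim's c_v coherence, where higher scores usually mean more interpretable topics):

```python
# Evaluate topic coherence on the token lists used to build the dictionary
coherence_model = CoherenceModel(model=lda_model, texts=texts,
                                 dictionary=id2word, coherence='c_v')
print('Coherence score:', coherence_model.get_coherence())
```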