Text Classification
Russ M. Delos Santos
Overview
Research:
Text classification for RAX Studio
Suggested Use Case:
- Account management through email.
Natural Language
Processing
1. Automatic or semi-automatic
processing of human language
2. Can be used for various
applications like
a. Sentiment Analysis
b. Intent Classification
c. Topic Labeling
Data
Pre-process to desired
text format
Features
Transform the text to
vectors (numbers)
Model
Feed the data to the
model
Prediction
Set prediction criteria
once the model
converges
Output
Output the class
General Process
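To make the flow above concrete, here is a minimal sketch of the same data → features → model → prediction → output steps using scikit-learn; the toy texts, labels, and choice of classifier are illustrative assumptions, not the RAX Studio pipeline.

```python
# Minimal sketch of the general process: data -> features -> model -> prediction -> output.
# Toy texts and labels are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["please reset my password", "invoice attached for your order",
         "cannot log in to my account", "your receipt for last month"]
labels = ["account", "billing", "account", "billing"]

# Features (text -> vectors) chained with a model (classifier).
pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipeline.fit(texts, labels)                                 # feed the data to the model

print(pipeline.predict(["I forgot my account password"]))   # output the class
```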
Dataset / Text Corpus
- Dictionary or vocabulary which is used to train
the model
● Either tagged (for supervised learning) or untagged
(for unsupervised).
● Size depends on the algorithm used. Should be pre-
processed to remove unwanted characters, to convert
to wanted format, etc.
Dataset / Text Corpus
- Open-source dataset samples
● Amazon Reviews
● NYSK Dataset (news articles)
● Enron Email Dataset
● Ling Spam Dataset
Feature Extraction
- Transforms texts to numbers (vector space
model)
- Choices:
● One-hot encoding
● Bag-of-words + TF*IDF
● Word2vec
One-hot encoding
- Creates a binary encoding of words: a 1 is placed
at the index of the word in the vocabulary, 0 everywhere else
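A minimal one-hot encoding sketch in plain Python; the toy corpus is made up for illustration.

```python
# Toy one-hot encoding: each word maps to a binary vector with a single 1
# at that word's index in the vocabulary.
corpus = "eat to live not live to eat".split()
vocab = sorted(set(corpus))             # ['eat', 'live', 'not', 'to']
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

print(one_hot("live"))   # [0, 1, 0, 0]
```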
Bag-of-words
- Uses the count of each vocabulary word in the
text as the feature
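A minimal bag-of-words sketch, assuming scikit-learn's CountVectorizer; note how the first two toy documents get identical count vectors even though their meanings differ.

```python
# Bag-of-words: each document becomes a vector of word counts over the vocabulary.
from sklearn.feature_extraction.text import CountVectorizer

docs = ["live to eat", "eat to live", "eat eat eat"]
bow = CountVectorizer()
counts = bow.fit_transform(docs)

print(bow.get_feature_names_out())  # ['eat' 'live' 'to']
print(counts.toarray())             # first two rows are identical: word order is lost
```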
TF*IDF
- Term Frequency * Inverse Document
Frequency
● Frequently occurring words are typically less
important / have less weight (stopwords such as “is”, “are”,
“the”, etc.)
● Weights are assigned per word.
TF*IDF
- Term Frequency * Inverse Document
Frequency
BOW + TF*IDF
BOW + TF*IDF
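A minimal sketch of the BOW + TF*IDF combination, assuming scikit-learn's TfidfVectorizer; the toy documents are illustrative only.

```python
# BOW counts re-weighted by TF*IDF: tfidf(t, d) = tf(t, d) * idf(t),
# where idf(t) = log(N / df(t)) (scikit-learn uses a smoothed variant).
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "cats and dogs"]
tfidf = TfidfVectorizer()
weights = tfidf.fit_transform(docs)   # sparse matrix of per-word weights per document

# Frequent words like "the" receive a low idf, so they contribute little weight.
for word, idf in zip(tfidf.get_feature_names_out(), tfidf.idf_):
    print(f"{word}: idf={idf:.2f}")
```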
word2vec
Uses the weights of the hidden layer of a neural
network as features of the words
● Can predict a context or a word based on the nearby
words in the corpus
● Uses the continuous bag-of-words or skip-gram model
with a 1-1-1 (input-hidden-output) neural network.
word2vec
- Gives better semantic/syntactic relationships
of words through vectors
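A minimal word2vec sketch, assuming the gensim library (4.x API); the tiny corpus and hyperparameters are illustrative assumptions, and real embeddings need far more text to show useful relationships.

```python
# word2vec sketch (skip-gram): the learned hidden-layer weights become the word vectors.
from gensim.models import Word2Vec

sentences = [["account", "password", "reset", "email"],
             ["invoice", "payment", "billing", "email"],
             ["password", "login", "account", "help"]]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["account"][:5])                   # first few dimensions of a word vector
print(model.wv.most_similar("account", topn=2))  # nearest words in the embedding space
```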
Schedule
Machine Learning Model
- A classifier algorithm that transforms an input
to the desired class
● Naive Bayes
● K-nearest neighbors
● Multilayer Perceptron
● Recurrent Neural Network + Long short-term memory
Naive Bayes
- Probabilistic model that relies on word count
● Uses bag of words as features
● Assumes that the position of words doesn't matter and
words are independent of each other
Naive Bayes
- Probabilistic model that relies on word count
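A minimal Naive Bayes sketch over bag-of-words counts, assuming scikit-learn's MultinomialNB; the toy spam/ham examples are made up.

```python
# Naive Bayes classification over word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["win a free prize now", "meeting rescheduled to friday",
         "free money claim your prize", "project update attached"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)          # word counts as features

clf = MultinomialNB()
clf.fit(X, labels)
print(clf.predict(vectorizer.transform(["claim your free prize"])))  # ['spam']
```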
K-Nearest Neighbors
- Assigns the class based on the smallest distance
to known, labeled examples
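A minimal k-nearest-neighbors sketch, assuming scikit-learn; the value of k and the toy examples are illustrative assumptions.

```python
# k-nearest neighbors: the predicted class comes from the closest labeled vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

texts = ["reset my password", "update my billing address",
         "locked out of my account", "question about my invoice"]
labels = ["account", "billing", "account", "billing"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

knn = KNeighborsClassifier(n_neighbors=3)    # k is a tunable assumption
knn.fit(X, labels)
print(knn.predict(vectorizer.transform(["cannot access my account"])))
```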
Multilayer Perceptron
- A feed-forward neural network
● Has one or more hidden layers
● Sigmoid function - binary classification
● Softmax function - multiclass classification
Multilayer Perceptron
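A minimal multilayer perceptron sketch, assuming scikit-learn's MLPClassifier; the two hidden-layer sizes and the logistic (sigmoid) activation are illustrative choices echoing the slide, not a tuned setup.

```python
# Feed-forward network with two hidden layers over TF*IDF features.
# MLPClassifier applies a logistic output for binary and softmax for multiclass labels.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

texts = ["reset my password", "billing question about my invoice",
         "cannot log in to my account", "charge appeared twice on my card"]
labels = ["account", "billing", "account", "billing"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), activation="logistic",
                    max_iter=500, random_state=0)
mlp.fit(X, labels)
print(mlp.predict(vectorizer.transform(["I forgot my password"])))
```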
Assessment
Option 1
● Features: BOW + TF*IDF
● ML Algorithm: Naive Bayes
● Pros: Easier to implement
● Cons: Uses word counts instead of
word sequences.
● Ex. ‘Live to eat’ and ‘Eat to live’
are treated as the same.
Option 2
● Features: word2vec word
embeddings
● ML Algorithm: Multilayer
Perceptron
● Pros: Produces better results,
semantically and syntactically
● Cons: Needs a big labeled
dataset to perform well
Main Blocks
ML.NET Learning Curve
- Still studying the framework.
- Not as well documented as Python frameworks/libraries
- Ex. It has a method called TextCatalog.FeaturizeText(), but there’s
no indication of which kind of feature extraction it performs.
Supervised Learning Needs Big Data
- We can use open-source datasets for benchmark.
- But we need datasets with specific labels for the algorithm to
work.
Main Blocks
Model Update Criteria
- Retraining the model for every unknown word is impractical.
- Suggestion:
- Set a minimum number of occurrences of new words before the
model is retrained (see the sketch below)
- Ignore rare new words, since they may not affect the
overall intent, sentiment, or meaning of the text.
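A minimal sketch of this update criterion; the threshold value and the data structures are assumptions for illustration.

```python
# Sketch of the suggested update criterion: track how often unseen words appear
# and only flag the model for retraining once some of them pass a threshold.
# The threshold value (5) and the known vocabulary are illustrative assumptions.
from collections import Counter

known_vocab = {"account", "password", "invoice", "payment"}
new_word_counts = Counter()
MIN_OCCURRENCES = 5

def observe(text):
    # Count occurrences of words the current model has never seen.
    for word in text.lower().split():
        if word not in known_vocab:
            new_word_counts[word] += 1

def should_retrain():
    # Retrain only if some new word is frequent enough to matter;
    # rare new words are ignored, as suggested above.
    return any(count >= MIN_OCCURRENCES for count in new_word_counts.values())
```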
Implementation Plan
- Email Cleaner
- Clean special characters, HTML tags, email headers and
footers, etc. (see the sketch after this plan)
- Set a standard file format (TSV, CSV, TXT, etc., or transform
to a binary format)
- Use a spam dataset in the meantime as a benchmark (binary
classification)
- Sentence Tokenizer + Feature Extraction
- Divide emails per sentence + word2vec
- Create Neural Network
- 1 input, 2 hidden, 1 output.
- Activation function - sigmoid
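A minimal sketch of the email cleaner and a naive sentence tokenizer described in this plan; the regexes are simple assumptions and would need refining for real headers, footers, and signatures.

```python
# Sketch of the planned email cleaner: strip HTML tags and special characters,
# then split the body into sentences for downstream feature extraction.
import re

def clean_email(raw_html):
    text = re.sub(r"<[^>]+>", " ", raw_html)          # drop HTML tags
    text = re.sub(r"[^A-Za-z0-9.,!?'\s]", " ", text)  # drop special characters
    return re.sub(r"\s+", " ", text).strip()          # normalize whitespace

def split_sentences(text):
    # Naive sentence tokenizer; NLTK or spaCy would be more robust.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

raw = "<html><body>Hello, please reset my password. Thanks!</body></html>"
print(split_sentences(clean_email(raw)))
```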
References
[1] D. Jurafsky and J. Martin, Speech and
Language Processing. Upper Saddle River,
N.J.: Pearson Prentice Hall, 2009.
[2] https://developers.google.com/machine-learning/
[3] Various Stack Overflow / Stack Exchange /
Kaggle threads
[4] Various Medium posts

Editor's Notes

  • #5: Key processes needed for natural language processing
  • #7: Can be used for benchmark testing depending on the needed classes
  • #9: Pros: Easy to implement. Cons: Outputs a large matrix in which most values are zeroes
  • #10: Pros: Easy, since it’s basically just counting. Cons: The sequence of words doesn’t matter, which is a largely erroneous assumption
  • #11: Provides the needed weight per word
  • #15: Sequential word embeddings - the order of words matters, thus providing semantics and syntax
  • #16: Vector space model
  • #17: Sample implementation result of word2vec in Python. The corpus text is initialized as corpus_raw. As you can see, ‘daughter’ and ‘infant’ are roughly equidistant from ‘prisoners’. ‘Kingdom’ is very closely related to ‘madam’, i.e. the queen, as told in the story
  • #20: Classified into the class with the highest probability P(class | keywords)
  • #21: A lazy classifier since only the distance determines the class
  • #22: Deep learning - uses neural network