The Text Classification slides contain research results on possible natural language processing algorithms. Specifically, they give a brief overview of the natural language processing steps, the common algorithms used to transform words into meaningful vectors/data, and the algorithms used to learn from and classify the data.
3. Natural Language Processing
1. Automatic or semi-automatic processing of human language
2. Can be used for various applications like:
a. Sentiment Analysis
b. Intent Classification
c. Topic Labeling
4. General Process
Data: pre-process the text into the desired format
Features: transform the text into vectors (numbers)
Model: feed the data to the model
Prediction: set the prediction criteria once the model converges
Output: output the predicted class
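For illustration only (the slides don't prescribe a library), this general process maps onto a few lines of scikit-learn; the toy texts, labels, and pipeline choices below are placeholder assumptions:

# Sketch of the general process: data -> features -> model -> prediction -> output.
# Library and data choices are illustrative assumptions, not from the slides.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Data: pre-processed texts and their class labels (toy placeholders)
texts = ["great product, works well", "terrible, broke in a day"]
labels = ["positive", "negative"]

# Features + Model: transform the text to vectors, then feed them to the model
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Prediction + Output: output the class for unseen text
print(model.predict(["works great"]))  # -> ['positive']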
5. Dataset / Text Corpus
- The dictionary or vocabulary used to train the model
● Either tagged (for supervised learning) or untagged (for unsupervised learning).
● Size depends on the algorithm used. Should be pre-processed to remove unwanted characters, convert to the desired format, etc.
10. TF*IDF
- Term Frequency * Inverse Document Frequency
● Frequently occurring words (stopwords such as “is, are, the, etc.”) are typically not important and get less weight.
● Weights are assigned per word.
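As a concrete sketch (our own toy corpus; tf(t, d) * log(N / df(t)) is one common variant of the weighting):

import math

# Toy TF*IDF illustration; the corpus and the exact variant shown
# are our assumptions for demonstration.
docs = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "the cats and the dogs play",
]

def tfidf(term, doc, corpus):
    words = doc.split()
    tf = words.count(term) / len(words)          # term frequency
    df = sum(term in d.split() for d in corpus)  # document frequency
    return tf * math.log(len(corpus) / df) if df else 0.0

# A stopword-like term that appears in every document gets weight 0,
# while a rarer, more informative term gets a positive weight.
print(tfidf("the", docs[0], docs))  # 0.0
print(tfidf("cat", docs[0], docs))  # ~0.18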
14. word2vec
- Uses the weights of the hidden layer of a neural network as features of the words
● Can predict a context or a word based on the nearby words in the corpus
● Uses a continuous bag-of-words or skip-gram model plus a 1-1-1 (one input, one hidden, one output layer) neural network.
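A minimal word2vec sketch using gensim (the library, corpus, and hyperparameters are our choices; the slides only describe the technique):

from gensim.models import Word2Vec

# Tokenized toy corpus (illustrative)
sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["the", "dog", "chases", "the", "cat"],
]

# sg=1 selects skip-gram; sg=0 would select continuous bag-of-words.
# The learned hidden-layer weights become the word vectors.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["king"].shape)        # (50,) -> the word's feature vector
print(model.wv.most_similar("king")) # nearby words by cosine similarity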
17. Machine Learning Model
- A classifier algorithm that maps an input to the desired class
● Naive Bayes
● K-nearest neighbors
● Multilayer Perceptron
● Recurrent Neural Network + Long Short-Term Memory
18. Naive Bayes
- Probabilistic model that relies on word counts
● Uses bag of words as features
● Assumes that the position of words doesn't matter and that words are independent of each other (see the sketch below)
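A hand-rolled toy version of the idea (our own example data; it scores classes by word counts alone, ignoring position):

from collections import Counter
import math

# Toy Naive Bayes over word counts with Laplace smoothing.
train = {
    "spam": ["win money now", "win a prize now"],
    "ham":  ["meeting at noon", "lunch at noon"],
}

def predict(text):
    vocab = {w for ds in train.values() for d in ds for w in d.split()}
    scores = {}
    for cls, docs in train.items():
        counts = Counter(w for d in docs for w in d.split())
        total = sum(counts.values())
        # log P(class) + sum of log P(word | class); word order is ignored.
        score = math.log(len(docs) / sum(len(ds) for ds in train.values()))
        for w in text.split():
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[cls] = score
    return max(scores, key=scores.get)

print(predict("win a prize"))  # -> 'spam'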
21. Multilayer Perceptron
- A feed-forward neural network
● Has at least one hidden layer
● Sigmoid function for binary classification
● Softmax function for multiclass classification
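For reference, the two output activations as a minimal numpy sketch (toy input values, our own example):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes to (0, 1): binary classification

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()               # probabilities over classes: multiclass

print(sigmoid(np.array([0.3])))           # e.g. P(class = 1)
print(softmax(np.array([2.0, 1.0, 0.1]))) # sums to 1 across 3 classes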
23. Assessment
Option 1
● Features: BOW + TF*IDF
● ML Algorithm: Naive Bayes
● Pros: Easier to implement
● Cons: Relies on word counts instead of word sequence, e.g. ‘Live to eat’ and ‘Eat to live’ may be treated as meaning the same (see the sketch after this list)
Option 2
● Features: word2vec word embeddings
● ML Algorithm: Multilayer Perceptron
● Pros: Produces better results, semantically and syntactically
● Cons: Needs a big labeled dataset to perform well
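Option 1's con can be shown directly (scikit-learn used for illustration only): both phrases yield identical bag-of-words vectors.

from sklearn.feature_extraction.text import CountVectorizer

# Bag of words ignores word order, so these two phrases are
# indistinguishable to the model.
vec = CountVectorizer()
X = vec.fit_transform(["live to eat", "eat to live"])
print(X.toarray())  # [[1 1 1]
                    #  [1 1 1]]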
24. Main Blocks
ML.NET Learning Curve
- Still studying the framework.
- Not as well documented as Python frameworks/libraries.
- E.g. it has a method called TextCatalog.FeaturizeText(), but there's no indication of what kind of feature extraction it performs.
Supervised Learning Needs Big Data
- We can use open-source datasets for benchmarking.
- But we need datasets with specific labels for the algorithm to work.
25. Main Blocks
Model Update Criteria
- Retraining the model for every unknown word is impractical.
- Suggestion:
- Set a minimum number of occurrences of a new word before the model is retrained (sketched below).
- Ignore rare new words, since they may not affect the overall intent, sentiment, or meaning of the text.
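One way to express the suggested criterion (a sketch; the threshold value and data structures are assumptions, not from the slides):

from collections import Counter

RETRAIN_THRESHOLD = 50  # assumed minimum occurrences of a new word
unknown_counts = Counter()

def should_retrain(tokens, vocabulary):
    # Count words the current model has never seen.
    for tok in tokens:
        if tok not in vocabulary:
            unknown_counts[tok] += 1
    # Rare new words stay below the threshold and are effectively ignored.
    return any(c >= RETRAIN_THRESHOLD for c in unknown_counts.values())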
26. Implementation Plan
- Email Cleaner (see the sketch after this list)
- Clean special characters, HTML tags, email headers and footers, etc.
- Set a standard file format (tsv, csv, txt, etc., or transform to bin)
- Use a spam dataset in the meantime as a benchmark (binary classification)
- Sentence Tokenizer + Feature Extraction
- Split emails into sentences + word2vec
- Create Neural Network
- 1 input, 2 hidden, 1 output layer
- Activation function: sigmoid
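A sketch of the planned email cleaner (the regexes and the kept character set are our assumptions; the plan only lists what should be removed):

import re

def clean_email(raw):
    text = re.sub(r"<[^>]+>", " ", raw)               # strip HTML tags
    text = re.sub(r"[^A-Za-z0-9\s.,!?']", " ", text)  # strip special characters
    return re.sub(r"\s+", " ", text).strip()          # collapse whitespace

print(clean_email("<p>Win a FREE prize!!!</p><div>Reply NOW</div>"))
# -> 'Win a FREE prize!!! Reply NOW'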
Editor's Notes
#5: Key processes needed for natural language processing
#7: Can be used for benchmark testing depending on the needed classes
#9: Pros: Easy to implement. Cons: Outputs a large matrix in which most values are zeroes.
#10: Pros: Easy, since it's basically just counting. Cons: The sequence of words doesn't matter, which is a largely erroneous assumption.
#17: Sample implementation result of word2vec in Python. The corpus text is initialized as corpus_raw. As shown, ‘daughter’ and ‘infant’ are roughly equally distant from ‘prisoners’. ‘Kingdom’ is very closely related to ‘madam’, i.e. the queen, as told in the story.
#20: Classified into the class with the highest probability P(class | keyword).
#21: A lazy classifier since only the distance determines the class