Bag-of-words model
The bag-of-words model is a model of text that represents a document as an unordered collection (or "bag") of its words. It is used in natural language processing and information retrieval (IR). It disregards word order (and thus most syntactic structure) but captures multiplicity, i.e. how many times each word occurs. The bag-of-words model has also been used for computer vision.[1]
The bag-of-words model is commonly used in methods of document classification where, for example, the (frequency of) occurrence of each word is used as a feature for training a classifier.[2]
An early reference to "bag of words" in a linguistic context can be found in Zellig Harris's 1954 article on Distributional Structure.[3]
Example implementation
The following models a text document using bag-of-words. Here are two simple text documents:
(1) John likes to watch movies. Mary likes movies too.
(2) Mary also likes to watch football games.
Based on these two text documents, a list is constructed as follows for each document:
"John","likes","to","watch","movies","Mary","likes","movies","too"
"Mary","also","likes","to","watch","football","games"
Representing each bag-of-words as a JSON object, and assigning it to the respective JavaScript variable:
BoW1 = {"John":1,"likes":2,"to":1,"watch":1,"movies":2,"Mary":1,"too":1};
BoW2 = {"Mary":1,"also":1,"likes":1,"to":1,"watch":1,"football":1,"games":1};
Each key is the word, and each value is the number of occurrences of that word in the given text document.
The order of the keys is not significant, so, for example, {"too":1,"Mary":1,"movies":2,"John":1,"watch":1,"likes":2,"to":1}
is equivalent to BoW1. This matches the semantics of a strict JSON object representation, in which the keys of an object are unordered.
Note: if another document is the union of these two,
(3) John likes to watch movies. Mary likes movies too. Mary also likes to watch football games.
its JavaScript representation will be:
BoW3 = {"John":1,"likes":3,"to":2,"watch":2,"movies":2,"Mary":2,"too":1,"also":1,"football":1,"games":1};
So, as the bag algebra suggests, the "union" of two documents in the bag-of-words representation is, formally, their disjoint union: the multiplicities of each element are summed.
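This summing of multiplicities can be computed directly with multiset arithmetic. The following is a minimal sketch (not part of the original example) using Python's collections.Counter, whose addition operator implements exactly this operation:

from collections import Counter

# Bags-of-words for documents (1) and (2), as above
BoW1 = Counter({"John": 1, "likes": 2, "to": 1, "watch": 1,
                "movies": 2, "Mary": 1, "too": 1})
BoW2 = Counter({"Mary": 1, "also": 1, "likes": 1, "to": 1,
                "watch": 1, "football": 1, "games": 1})

# Counter addition sums the multiplicity of each element,
# i.e. it forms the disjoint union of the two bags
BoW3 = BoW1 + BoW2
print(BoW3)
# Counter({'likes': 3, 'to': 2, 'watch': 2, 'movies': 2, 'Mary': 2,
#          'John': 1, 'too': 1, 'also': 1, 'football': 1, 'games': 1})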
Application
Implementations of the bag-of-words model might involve using frequencies of words in a document to represent its contents. The frequencies can be "normalized" by the inverse of document frequency, or tf–idf. Additionally, for the specific purpose of classification, supervised alternatives have been developed to account for the class label of a document.[4] Lastly, binary (presence/absence or 1/0) weighting is used in place of frequencies for some problems (e.g., this option is implemented in the WEKA machine learning software system).
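As an illustration of these weighting schemes, the following is a minimal sketch (using one common tf–idf variant as an assumption, not taken from the cited sources) that computes tf–idf and binary weights from raw counts over a small corpus:

import math
from collections import Counter

corpus = [
    "John likes to watch movies Mary likes movies too".split(),
    "Mary also likes to watch football games".split(),
]
counts = [Counter(doc) for doc in corpus]
n_docs = len(corpus)

# Document frequency: the number of documents containing each word
vocabulary = {w for c in counts for w in c}
df = {w: sum(1 for c in counts if w in c) for w in vocabulary}

def tf_idf(c: Counter) -> dict:
    # Term frequency weighted by inverse document frequency
    # (one common variant: tf * log(N / df))
    return {w: c[w] * math.log(n_docs / df[w]) for w in c}

def binary(c: Counter) -> dict:
    # Presence/absence (1/0) weighting
    return {w: 1 for w in c}

print(tf_idf(counts[0]))  # words appearing in both documents get weight 0
print(binary(counts[0]))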
Python implementation
# Make sure to install the necessary packages first:
# pip install --upgrade pip
# pip install tensorflow
from typing import List
from tensorflow.keras.preprocessing.text import Tokenizer

sentence = ["John likes to watch movies. Mary likes movies too."]

def print_bow(sentence: List[str]) -> None:
    # Build a vocabulary from the input text(s)
    tokenizer = Tokenizer()
    tokenizer.fit_on_texts(sentence)
    # Convert each text to a sequence of integer word indices
    sequences = tokenizer.texts_to_sequences(sentence)
    word_index = tokenizer.word_index
    # Count how often each vocabulary word occurs in the first text
    bow = {}
    for key in word_index:
        bow[key] = sequences[0].count(word_index[key])

    print(f"Bag of words sentence 1:\n{bow}")
    print(f"We found {len(word_index)} unique tokens.")

print_bow(sentence)
Hashing trick
A common alternative to maintaining a dictionary is the hashing trick, where words are mapped directly to indices with a hashing function.[5] Thus, no memory is required to store a dictionary. Hash collisions are typically mitigated by increasing the number of hash buckets, at the cost of additional memory; alternatively, they can simply be tolerated, with colliding words sharing a single feature. In practice, hashing simplifies the implementation of bag-of-words models and improves scalability.
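A minimal sketch of the idea (the hash function and bucket count below are illustrative choices, not those of the cited paper):

import hashlib

N_BUCKETS = 16  # illustrative; practical systems use far more buckets

def bucket(word: str) -> int:
    # Map a word directly to a feature index with a hash function,
    # so no dictionary of words needs to be stored. hashlib is used
    # (rather than Python's built-in hash) so that indices are
    # stable across processes.
    digest = hashlib.md5(word.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % N_BUCKETS

def hashed_bow(text: str) -> list:
    vector = [0] * N_BUCKETS
    for word in text.split():
        vector[bucket(word)] += 1  # colliding words share a bucket
    return vector

print(hashed_bow("John likes to watch movies Mary likes movies too"))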
See also
- Additive smoothing
- Bag-of-words model in computer vision
- Document classification
- Document-term matrix
- Feature extraction
- Hashing trick
- Machine learning
- MinHash
- n-gram
- Natural language processing
- Vector space model
- w-shingling
- tf–idf
Notes
- ^ Sivic, Josef (April 2009). "Efficient visual search of videos cast as text retrieval" (PDF). IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (4): 591–605.
- ^ McTear et al. 2016, p. 167.
- ^ Harris, Zellig (1954). "Distributional Structure". Word. 10 (2/3): 146–62. doi:10.1080/00437956.1954.11659520.
"And this stock of combinations of elements becomes a factor in the way later choices are made ... for language is not merely a bag of words but a tool with particular properties which have been fashioned in the course of its use"
- ^ Youngjoong Ko (2012). "A study of term weighting schemes using class information for text classification". SIGIR'12. ACM.
- ^ Weinberger, K. Q.; Dasgupta A.; Langford J.; Smola A.; Attenberg, J. (2009). "Feature hashing for large scale multitask learning". Proceedings of the 26th Annual International Conference on Machine Learning. pp. 1113–1120. arXiv:0902.2206. Bibcode:2009arXiv0902.2206W. doi:10.1145/1553374.1553516. ISBN 9781605585161. S2CID 291713.
References
- McTear, Michael; et al. (2016). The Conversational Interface. Springer International Publishing.