Keyword (linguistics)

From Wikipedia, the free encyclopedia

In corpus linguistics a key word is a word which occurs in a text more often than we would expect it to occur by chance alone.[1] Key words are identified by carrying out a statistical test (e.g., log-likelihood or chi-squared) which compares the word frequencies in a text against their expected frequencies derived from a much larger corpus, which acts as a reference for general language use. Keyness is then the quality a word or phrase has of being "key" in its context.
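For illustration, the following Python sketch scores each word of a target text against a reference corpus using the log-likelihood (G²) statistic commonly used for keyness in corpus linguistics. The texts are invented toy data, and real keyword tools add frequency cut-offs and significance thresholds on top of a comparison like this.

```python
import math
from collections import Counter

def keyness_scores(target_tokens, reference_tokens):
    """Log-likelihood (G2) keyness of each word in a target text
    relative to a larger reference corpus."""
    target = Counter(target_tokens)
    reference = Counter(reference_tokens)
    n_target = sum(target.values())
    n_reference = sum(reference.values())
    scores = {}
    for word, a in target.items():
        b = reference.get(word, 0)
        # Expected frequencies if the word were equally likely in both corpora.
        e1 = n_target * (a + b) / (n_target + n_reference)
        e2 = n_reference * (a + b) / (n_target + n_reference)
        g2 = 2 * (a * math.log(a / e1) + (b * math.log(b / e2) if b > 0 else 0))
        scores[word] = g2
    return scores

# Hypothetical toy data: a short "text" compared against a "reference corpus".
text = "the whale hunted the whale and the whale dived".split()
reference = ("the cat sat on the mat and the dog slept by the door " * 50).split()
for word, g2 in sorted(keyness_scores(text, reference).items(),
                       key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{word}: G2 = {g2:.2f}")
```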

Keyword as a language feature

Probabilistic methods for keyword extraction are widely used in corpus linguistics, literary and linguistic computing, and digital humanities. These methods, originating from information retrieval and computer science, were initially developed for text indexing and term search. One widely used approach is term frequency–inverse document frequency (tf–idf), which weighs the importance of terms in information retrieval systems. Despite its popularity, tf–idf is an empirical weighting scheme that exists in many variants. Studies in this area often focus on quantifying term significance and relevance in retrieval processes, using measures such as frequency, signal-to-noise ratio, and relevance weighting. Additionally, fields like computational terminology and machine learning employ statistical measures such as the chi-squared statistic, pointwise mutual information, the Dice coefficient, the log-likelihood ratio, and Jaccard similarity for automatic term extraction and feature subset selection.
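As a concrete example, the Python sketch below computes one common variant of tf–idf (raw term frequency multiplied by a log-scaled inverse document frequency) over a small invented corpus. Real systems typically add tokenisation, normalisation and smoothing steps not shown here.

```python
import math
from collections import Counter

def tf_idf(documents):
    """Compute tf-idf weights for a list of tokenised documents.
    Uses raw term frequency and log-scaled inverse document frequency,
    one of several common variants of the scheme."""
    n_docs = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))       # count each term once per document
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({
            term: count * math.log(n_docs / doc_freq[term])
            for term, count in tf.items()
        })
    return weights

# Hypothetical toy corpus.
docs = [
    "keyword extraction from text corpora".split(),
    "statistical analysis of text frequency".split(),
    "keyword frequency in reference corpora".split(),
]
print(tf_idf(docs)[0])
```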

Keyword extraction in natural language processing and linguistic applications, including text mining, involves extracting valuable insights from large volumes of textual data using machine-driven, human-assisted, or hybrid methods. A key challenge is extracting keywords from texts without prior information. Luhn pioneered unsupervised keyword extraction by leveraging Zipf's frequency analysis, which ranks words by frequency of occurrence. Zipf's law observes that a word's frequency is inversely proportional to its rank. Luhn's method involves discarding words at the extremes of the frequency list and treating the rest as keywords, as sketched below.
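A rough Python sketch of this idea follows; the frequency cut-offs and the toy text are arbitrary illustrative choices rather than values proposed by Luhn.

```python
from collections import Counter

def luhn_keywords(tokens, min_freq=2, max_freq=3):
    """Luhn-style selection: keep words whose frequency falls between two
    cut-offs, discarding very frequent (function) words and very rare ones.
    The thresholds here are arbitrary illustrative choices."""
    counts = Counter(tokens)
    return [word for word, count in counts.most_common()
            if min_freq <= count <= max_freq]

# Hypothetical toy text.
tokens = ("the the the the the of of of of and and and and "
          "whale whale captain captain captain sea sea voyage harpoon").split()
print(luhn_keywords(tokens))   # middle band: captain, whale, sea
```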

Another unsupervised probabilistic approach to keyword extraction uses Shannon's entropy to measure the information content of each word. Shannon's entropy, widely applied in the physics literature, has also found relevance in linguistics and natural language studies. Applications include DNA sequence analysis, measuring long-range correlations, language acquisition studies, resolving authorship disputes, communication modeling, and statistical analysis of word roles in corpora.
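One way such a measure can be set up is sketched below: for each word, the Shannon entropy of its distribution across documents (or text segments) is computed, so that words concentrated in a few places receive low entropy while evenly spread words receive high entropy. How the entropy is then turned into a keyword score varies between studies; the documents here are invented.

```python
import math
from collections import Counter

def word_entropy(documents):
    """Shannon entropy of each word's distribution across documents.
    Low entropy means the word is concentrated in few documents;
    high entropy means it is spread evenly."""
    per_doc = [Counter(doc) for doc in documents]
    totals = Counter()
    for counts in per_doc:
        totals.update(counts)
    entropies = {}
    for word, total in totals.items():
        h = 0.0
        for counts in per_doc:
            p = counts[word] / total
            if p > 0:
                h -= p * math.log2(p)
        entropies[word] = h
    return entropies

# Hypothetical toy documents.
docs = [
    "whale whale harpoon sea".split(),
    "sea voyage sea captain".split(),
    "captain whale sea sea".split(),
]
print(word_entropy(docs))
```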

Statistical keyword extraction methods typically rely on modeling the distribution of term frequencies (i.e. word counts) within a corpus, using document-term matrices. In computational linguistics, the negative binomial distribution (NBD) was proposed as a candidate for describing natural language data, mirroring its use in ecology and biostatistics. This choice is primarily attributed to the NBD's ability to account for overdispersion, a common phenomenon observed in word counts. Overdispersion arises from the tendency of content words to aggregate, leading to a skewed distribution of term frequencies within a text. The estimated parameters of the NBD may therefore capture this variability, which makes them a useful measure for extracting keywords that represent salient features of the text, such as named entities. This approach, adopted from the analysis of ecological systems, offers a robust framework for keyword extraction and also sheds light on the underlying statistical properties of linguistic data, contributing to a better understanding of word use dynamics.
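The overdispersion at the heart of this approach can be illustrated with a simplified Python sketch that compares the variance of a word's per-document counts with its mean. Under a Poisson model the variance equals the mean, so a variance-to-mean ratio well above 1 signals the kind of "burstiness" the NBD is used to model. Ranking words by this ratio is a method-of-moments simplification of fitting a full negative binomial model, and the documents below are invented.

```python
from collections import Counter
from statistics import mean, pvariance

def overdispersion_scores(documents):
    """Variance-to-mean ratio of each word's per-document counts.
    Ratios well above 1 indicate overdispersion (bursty, content-like
    words); ratios near or below 1 indicate evenly spread words."""
    per_doc = [Counter(doc) for doc in documents]
    vocab = set().union(*documents)
    scores = {}
    for word in vocab:
        counts = [c[word] for c in per_doc]
        m, v = mean(counts), pvariance(counts)
        scores[word] = v / m if m > 0 else 0.0
    return scores

# Hypothetical toy documents: "whale" is bursty, "the" is evenly spread.
docs = [
    "whale whale whale harpoon sea the the".split(),
    "the sea the voyage the captain".split(),
    "the captain the sea the".split(),
]
for word, ratio in sorted(overdispersion_scores(docs).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{word}: variance/mean = {ratio:.2f}")
```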

Keyword as a textual phenomenon

Compare this with collocation, the quality linking two words or phrases usually assumed to occur within a given span of each other. Keyness is a textual feature, not a language feature: a word has keyness in a certain textual context but may well not have keyness in other contexts, whereas a node and its collocate are often found together in texts of the same genre, so collocation is to a considerable extent a language phenomenon. The set of key words found in a given text share keyness; they are co-key. Words typically found in the same texts as a key word are called associates.

In politics, sociology and critical discourse analysis, the key reference for keywords was Raymond Williams (1976), but Williams was resolutely Marxist, and Critical Discourse Analysis has tended to perpetuate this political meaning of the term: keywords are part of ideologies, and studying them is part of social criticism. Cultural Studies has tended to develop along similar lines. This stands in stark contrast to present-day linguistics, which is wary of political analysis and has tended to aspire to non-political objectivity. The development of technology, and of new techniques and methodologies for working with massive corpora, has consolidated this trend.

There are, however, numerous political dimensions that come into play when keywords are studied in relation to cultures, societies and their histories. The Lublin Ethnolinguistics School studies Polish and European keywords in this fashion. Anna Wierzbicka (1997), probably the best known cultural linguist writing in English today, studies languages as parts of cultures evolving in society and history. And it becomes impossible to ignore politics when keywords migrate from one culture to another. Gianninoto (Underhill & Gianninoto 2019) demonstrates the way political terms like "citizen" and "individual" are integrated into the Chinese worldview over the course of the 19th and 20th centuries. She argues that this is part of a complex readjustment of conceptual clusters related to "the people". Keywords like "citizen" generate various translations in Chinese, and are part of an ongoing adaptation to global concepts of individual rights and responsibilities. Understanding keywords in this light becomes crucial for understanding how the politics of China evolves as Communism emerges and as the free market and citizens' rights develop. Underhill (Underhill & Gianninoto 2019) argues that this is part of the complex ways in which ideological worldviews interact with language as an ongoing means of perceiving and understanding the world.

Barbara Cassin studies keywords in a more traditional manner, striving to define the words specific to individual cultures in order to demonstrate that many of our keywords are partially "untranslatable" into their "equivalents". The Greeks may need four words to cover all the meanings English speakers have in mind when speaking of "love". Similarly, the French find that "liberté" suffices, while English speakers attribute different associations to "liberty" and "freedom": "freedom of speech" or "freedom of movement", but "the Statue of Liberty".

References

  1. ^ Scott, M. & Tribble, C., 2006, Textual Patterns: keyword and corpus analysis in language education, Amsterdam: Benjamins, 55.

Bibliography

  • Cassin, Barbara, 2014, Dictionary of Untranslatables, Princeton: Princeton University Press.
  • Scott, M. & Tribble, C., 2006, Textual Patterns: keyword and corpus analysis in language education, Amsterdam: Benjamins, especially chapters 4 & 5.
  • Underhill, James & Gianninoto, Rosamaria, 2019, Migrating Meanings: Sharing Keywords in a Global World, Edinburgh: Edinburgh University Press.
  • Wierzbicka, Anna, 1997, Understanding Cultures through their Key Words, Oxford: Oxford University Press.
  • Williams, Raymond, 1976, Keywords: A Vocabulary of Culture and Society, New York: Oxford University Press.