
Approximate string matching

From Wikipedia, the free encyclopedia

A fuzzy MediaWiki search for "angry emoticon" has as a suggested result "andré emotions".

In computer science, approximate string matching (often colloquially referred to as fuzzy string searching) is the technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately.

Overview


The closeness of a match is measured in terms of the number of primitive operations necessary to convert the string into an exact match. This number is called the edit distance between the string and the pattern. The usual primitive operations are:[1]

  • insertion: cot → coat
  • deletion: coat → cot
  • substitution: coat → cost

These three operations may be generalized as forms of substitution by adding a NULL character (here symbolized by *) wherever a character has been deleted or inserted:

  • insertion: co*t → coat
  • deletion: coat → co*t
  • substitution: coat → cost

Some approximate matchers also treat transposition, in which the positions of two letters in the string are swapped, as a primitive operation.[1]

  • transposition: cost → cots

Different approximate matchers impose different constraints. Some matchers use a single global unweighted cost, that is, the total number of primitive operations necessary to convert the match to the pattern. For example, if the pattern is coil, foil differs by one substitution, coils by one insertion, oil by one deletion, and foal by two substitutions. If all operations count as a single unit of cost and the limit is set to one, foil, coils, and oil will count as matches while foal will not.
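Under this single-unit-cost model, the distances quoted above can be checked with a standard Levenshtein distance computation. The following is a minimal sketch (the function name is illustrative, not from any cited source):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance with unit costs,
    # keeping only the previous row of the table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# The examples from the text: foil, coils and oil are within distance 1
# of the pattern coil, while foal is at distance 2.
for candidate in ("foil", "coils", "oil", "foal"):
    print(candidate, levenshtein("coil", candidate))
```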

Other matchers specify the number of operations of each type separately, while still others set a total cost but allow different weights to be assigned to different operations. Some matchers permit separate assignments of limits and weights to individual groups in the pattern.

Problem formulation and algorithms


One possible definition of the approximate string matching problem is the following: given a pattern string P and a text string T, find a substring of T which, of all substrings of T, has the smallest edit distance to the pattern P.

A brute-force approach would be to compute the edit distance to P for all substrings of T, and then choose the substring with the minimum distance. However, this algorithm would have running time O(n³m), where n is the length of T and m is the length of P.
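The brute-force approach can be sketched as follows (a naive illustration only; the function name is made up and the recursive distance computation is chosen for brevity, not speed):

```python
from functools import lru_cache

def brute_force_best_match(pattern: str, text: str):
    # Try every substring of `text` and keep the one with the
    # smallest edit distance to `pattern`.
    @lru_cache(maxsize=None)
    def dist(a: str, b: str) -> int:
        if not a:
            return len(b)
        if not b:
            return len(a)
        return min(dist(a[1:], b) + 1,               # deletion
                   dist(a, b[1:]) + 1,               # insertion
                   dist(a[1:], b[1:]) + (a[0] != b[0]))  # substitution
    best = (len(pattern), "")  # distance to the empty substring
    for i in range(len(text)):
        for j in range(i, len(text) + 1):
            best = min(best, (dist(pattern, text[i:j]), text[i:j]))
    return best

print(brute_force_best_match("coil", "recoiling"))
```

The nested loops enumerate O(n²) substrings and each distance computation costs O(nm), giving the cubic-in-n behaviour noted above.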

A better solution, proposed by Sellers,[2] relies on dynamic programming. It uses an alternative formulation of the problem: for each position j in the text T and each position i in the pattern P, compute the minimum edit distance between the first i characters of the pattern, P[1...i], and any substring of T that ends at position j.

For each position j in the text T, and each position i in the pattern P, go through all substrings of T ending at position j, and determine which one of them has the minimal edit distance to the first i characters of the pattern P. Write this minimal distance as E(i, j). After computing E(i, j) for all i and j, we can easily find a solution to the original problem: it is the substring for which E(m, j) is minimal (m being the length of the pattern P).

Computing E(m, j) is very similar to computing the edit distance between two strings. In fact, we can use the Levenshtein distance computing algorithm for E(m, j), the only difference being that we must initialize the first row with zeros, and save the path of computation, that is, whether we used E(i − 1, j), E(i, j − 1) or E(i − 1, j − 1) in computing E(i, j).

In the array containing the E(x, y) values, we then choose the minimal value in the last row, let it be E(x₂, y₂), and follow the path of computation backwards, back to the row number 0. If the field we arrived at was E(0, y₁), then T[y₁ + 1] ... T[y₂] is a substring of T with the minimal edit distance to the pattern P.

Computing the E(x, y) array takes O(mn) time with the dynamic programming algorithm, while the backwards-working phase takes O(n + m) time.
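The scheme described above can be sketched in a few lines of code (an illustrative implementation, with the zero-initialized first row and a traceback; the function name is made up):

```python
def sellers_search(pattern: str, text: str):
    # E[i][j]: minimum edit distance between the first i characters of
    # the pattern and some substring of the text ending at position j.
    m, n = len(pattern), len(text)
    # First row is zero: a match may start anywhere in the text.
    E = [[0] * (n + 1)]
    for i in range(1, m + 1):
        row = [i]  # first column: pattern prefix vs. empty substring
        for j in range(1, n + 1):
            cost = pattern[i - 1] != text[j - 1]
            row.append(min(E[i - 1][j] + 1,         # deletion
                           row[j - 1] + 1,          # insertion
                           E[i - 1][j - 1] + cost)) # substitution/match
        E.append(row)
    # The best match ends where the last row is minimal.
    end = min(range(n + 1), key=lambda j: E[m][j])
    # Follow the path of computation backwards to row 0 to find the start.
    i, j = m, end
    while i > 0:
        cost = pattern[i - 1] != text[j - 1] if j > 0 else 1
        if j > 0 and E[i][j] == E[i - 1][j - 1] + cost:
            i, j = i - 1, j - 1
        elif E[i][j] == E[i - 1][j] + 1:
            i -= 1
        else:
            j -= 1
    return text[j:end], E[m][end]

print(sellers_search("coil", "my recojl spring"))
```

The traceback simply records which of the three neighbouring cells produced each value, as described above, so the start of the matching substring is recovered in O(n + m) time.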

A more recent idea is the similarity join. When matching large data sets in databases, even the O(mn) dynamic programming algorithm may not run within an acceptable time. The idea, therefore, is to reduce the number of candidate pairs instead of computing the similarity of all pairs of strings. Widely used algorithms are based on filter-verification, hashing, locality-sensitive hashing (LSH), tries, and other greedy and approximation algorithms. Most of them are designed to fit some framework (such as MapReduce) to compute concurrently.
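The filter-verification idea can be illustrated with a simplified q-gram filter (an illustrative sketch only: this filter can miss pairs, e.g. very short strings that share no q-gram; practical similarity joins use count filtering or LSH with guarantees, and the function names are made up):

```python
from collections import defaultdict

def qgrams(s: str, q: int = 2) -> set:
    # All contiguous substrings of length q.
    return {s[i:i + q] for i in range(len(s) - q + 1)}

def similarity_join(strings, max_dist=1, q=2):
    # Filter step: only pairs sharing at least one q-gram become candidates.
    index = defaultdict(set)
    for idx, s in enumerate(strings):
        for g in qgrams(s, q):
            index[g].add(idx)
    candidates = set()
    for ids in index.values():
        ids = sorted(ids)
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                candidates.add((ids[a], ids[b]))
    # Verification step: compute the real edit distance only for candidates.
    def dist(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[-1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]
    return [(strings[a], strings[b]) for a, b in sorted(candidates)
            if dist(strings[a], strings[b]) <= max_dist]

print(similarity_join(["coil", "cojl", "foal", "spring"]))
```

The filter prunes most pairs cheaply, so the expensive O(mn) verification runs only on a small candidate set.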

On-line versus off-line


Traditionally, approximate string matching algorithms are classified into two categories: on-line and off-line. With on-line algorithms the pattern can be processed before searching but the text cannot; in other words, on-line techniques search without an index. Early algorithms for on-line approximate matching were suggested by Wagner and Fischer[3] and by Sellers.[2] Both algorithms are based on dynamic programming but solve different problems: Sellers' algorithm searches approximately for a substring in a text, while the algorithm of Wagner and Fischer calculates the Levenshtein distance between two whole strings, which makes it appropriate only for dictionary fuzzy search.

On-line searching techniques have been repeatedly improved. Perhaps the most famous improvement is the bitap algorithm (also known as the shift-or and shift-and algorithm), which is very efficient for relatively short pattern strings. The bitap algorithm is the heart of the Unix searching utility agrep. A review of on-line searching algorithms was done by G. Navarro.[4]
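The bit-parallel idea behind bitap can be sketched as follows (a shift-and formulation of the Wu–Manber extension for up to k errors; the function name is illustrative and this is not agrep's actual code):

```python
def bitap_fuzzy(pattern: str, text: str, k: int = 1) -> int:
    # Returns the end index in `text` of the first occurrence of
    # `pattern` with at most k edit errors, or -1 if there is none.
    m = len(pattern)
    B = {}  # per-character bit mask: bit i set if pattern[i] == c
    for i, c in enumerate(pattern):
        B[c] = B.get(c, 0) | (1 << i)
    # R[d]: bit i set means pattern[0..i] matched with <= d errors.
    R = [(1 << d) - 1 for d in range(k + 1)]
    accept = 1 << (m - 1)
    for j, c in enumerate(text):
        mask = B.get(c, 0)
        prev = R[0]  # previous value of R[d-1]
        R[0] = ((R[0] << 1) | 1) & mask
        for d in range(1, k + 1):
            old = R[d]
            R[d] = ((((R[d] << 1) | 1) & mask)  # exact character match
                    | prev                       # insertion in text
                    | (prev << 1)                # substitution
                    | (R[d - 1] << 1) | 1)       # deletion / fresh start
            prev = old
        if R[k] & accept:
            return j
    return -1
```

Each text character updates k + 1 machine words with a handful of bitwise operations, which is why the method is so fast when the pattern fits in a word.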

Although very fast on-line techniques exist, their performance on large data is unacceptable. Text preprocessing or indexing makes searching dramatically faster, and a variety of indexing algorithms have been presented. Among them are suffix trees,[5] metric trees[6] and n-gram methods.[7][8] A detailed survey of indexing techniques that allow one to find an arbitrary substring in a text is given by Navarro et al.[7] A computational survey of dictionary methods (i.e., methods that permit finding all dictionary words that approximately match a search pattern) is given by Boytsov.[9]

Applications


Common applications of approximate matching include spell checking.[5] With the availability of large amounts of DNA data, matching of nucleotide sequences has become an important application.[1] Approximate matching is also used in spam filtering.[5] Record linkage is a common application where records from two disparate databases are matched.

String matching cannot be used for most binary data, such as images and music; these require different techniques, such as acoustic fingerprinting.

The command-line tool fzf is commonly used to integrate approximate string searching into various command-line applications.[10]

References


Citations

  1. ^ a b c Cormen & Leiserson 2001.
  2. ^ a b Sellers 1980.
  3. ^ Wagner & Fischer 1974.
  4. ^ Navarro 2001.
  5. ^ a b c Gusfield 1997.
  6. ^ Baeza-Yates & Navarro 1998.
  7. ^ a b Navarro et al. 2001.
  8. ^ Zobel & Dart 1995.
  9. ^ Boytsov 2011.
  10. ^ "Fzf - A Quick Fuzzy File Search from Linux Terminal". www.tecmint.com. 2018-11-08. Retrieved 2022-09-08.
