In [[computer science]], '''string searching algorithms''', sometimes called '''string matching algorithms''', are an important class of [[string algorithm]]s that try to find a place where one or several [[string (computer science)|strings]] (also called [[pattern]]s) are found within a larger string or text.

Let Σ be an [[Alphabet (computer science)|alphabet]] ([[finite set]]). Formally, both the pattern and the searched text are vectors of elements of Σ. Σ may be an ordinary human alphabet (for example, the letters A through Z in the Latin alphabet); other applications may use a ''binary alphabet'' (Σ = {0,1}) or a ''DNA alphabet'' (Σ = {A,C,G,T}) in [[bioinformatics]].

In practice, how the string is encoded can affect the feasible string searching algorithms. In particular, if a [[variable width encoding]] is in use, then finding the ''N''th character takes time proportional to ''N'', which significantly slows down many of the more advanced search algorithms. A possible solution is to search for the sequence of code units instead, but doing so may produce false matches unless the encoding is specifically designed to avoid them.
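UTF-8 is one such encoding: it is self-synchronizing, so a whole encoded pattern can never match starting in the middle of another character's code units. A minimal Python sketch of code-unit level searching (the example strings are purely illustrative):
<syntaxhighlight lang="python">
# Code-unit (byte) level search in UTF-8: a minimal illustrative sketch.
# Because UTF-8 is self-synchronizing, matching a whole encoded pattern at
# the byte level cannot produce a false match inside another character.
text = "naïve Σ-search".encode("utf-8")   # haystack as UTF-8 code units
pattern = "Σ".encode("utf-8")             # pattern as UTF-8 code units

byte_offset = text.find(pattern)          # an offset in code units, not characters
print(byte_offset)                        # 7, since "naïve " occupies 7 bytes
</syntaxhighlight>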
== Basic classification ==
The various [[algorithm]]s can be classified by the number of patterns each uses.

=== Single pattern algorithms ===
Let ''m'' be the length of the pattern and let ''n'' be the length of the searchable text.
{| class="wikitable"
|-
! Algorithm
! Preprocessing time
! Matching time<sup>1</sup>
|-
! Naïve string search algorithm
| 0 <!-- that is a zero, not an O --> (no preprocessing)
| Θ((n−m+1) m)
|-
! [[Rabin–Karp string search algorithm]]
| Θ(m)
| average Θ(n+m),<br/>worst Θ((n−m+1) m)
|-
! [[Finite-state machine|Finite-state automaton]] based search
| Θ(m |Σ|) <!-- vertical bars confuse MediaWiki -->
| Θ(n)
|-
! [[Knuth–Morris–Pratt algorithm]]
| Θ(m)
| Θ(n)
|-
! [[Boyer–Moore string search algorithm]]
| Θ(m + |Σ|)
| Ω(n/m), O(nm)
|-
! [[Bitap algorithm]] (''shift-or'', ''shift-and'', ''Baeza–Yates–Gonnet'')
| Θ(m + |Σ|) <!-- vertical bars confuse MediaWiki -->
| O(mn)
|}
<sup>1</sup> Asymptotic times are expressed using [[Big O notation|O, Ω, and Θ notation]].

The '''Boyer–Moore string search algorithm''' has been the standard benchmark for the practical string search literature.<ref name=":0">{{cite journal |last=Hume |last2=Sunday |year=1991 |title=Fast String Searching |journal=Software: Practice and Experience |volume=21 |issue=11 |pages=1221–1248 |doi=10.1002/spe.4380211105 }}</ref>
=== Algorithms using a finite set of patterns ===
* [[Aho–Corasick string matching algorithm]]
* [[Commentz-Walter algorithm]]
* [[Rabin–Karp string search algorithm]]
=== Algorithms using an infinite number of patterns ===
Naturally, the patterns cannot be enumerated in this case. They are usually represented by a [[regular grammar]] or [[regular expression]].
== Other classification ==
{{unreferenced section|date=July 2013}}
Other classification approaches are possible. One of the most common uses preprocessing as the main criterion.
{| class="wikitable"
|+ Classes of string searching algorithms<ref>Melichar, Borivoj, Jan Holub, and J. Polcar. ''Text Searching Algorithms. Volume I: Forward String Matching.'' Vol. 1. 2 vols., 2005. http://stringology.org/athens/TextSearchingAlgorithms/.</ref>
!
! Text not preprocessed
! Text preprocessed
|-
! Patterns not preprocessed
| Elementary algorithms
| Index methods
|-
! Patterns preprocessed
| Constructed search engines
| Signature methods
|}
=== Naïve string search ===
The simplest and least efficient way to see where one string occurs inside another is to check each place it could be, one by one, to see whether it is there. So first we see if there is a copy of the needle starting at the first character of the haystack; if not, we look to see if there is a copy of the needle starting at the second character of the haystack; if not, we look starting at the third character, and so forth. In the normal case, we only have to look at one or two characters for each wrong position to see that it is a wrong position, so in the average case this takes [[Big O notation|O]](''n'' + ''m'') steps, where ''n'' is the length of the haystack and ''m'' is the length of the needle; but in the worst case, searching for a string like "aaaab" in a string like "aaaaaaaaab", it takes [[Big O notation|O]](''nm'') steps.
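The following Python sketch spells out this brute-force scan (the function name and the test strings are illustrative):
<syntaxhighlight lang="python">
def naive_search(haystack: str, needle: str) -> list:
    """Return the start index of every occurrence of needle in haystack."""
    n, m = len(haystack), len(needle)
    matches = []
    for i in range(n - m + 1):               # every position a match could start at
        if haystack[i:i + m] == needle:      # compare the candidate window
            matches.append(i)
    return matches

print(naive_search("aaaaaaaaab", "aaaab"))   # [5]
</syntaxhighlight>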
=== Finite-state automaton based search ===
[[Image:DFA search mommy.svg|200px|right]]
In this approach, we avoid backtracking by constructing a [[deterministic finite automaton]] (DFA) that recognizes the stored search string. Such automata are expensive to construct (they are usually created using the [[powerset construction]]) but are very quick to use. For example, the DFA shown at right recognizes the word "MOMMY".
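The transition table of such an automaton can also be computed directly from the pattern, as in the following Python sketch; this is a plain textbook-style construction rather than the powerset construction itself, and the names and tiny test alphabet are illustrative:
<syntaxhighlight lang="python">
def build_dfa(pattern, alphabet):
    """Transition table of the string-matching automaton for `pattern`.

    State q means "the last q characters read equal the first q characters
    of the pattern"; reaching state len(pattern) signals a match.
    """
    m = len(pattern)
    dfa = []
    for q in range(m + 1):
        row = {}
        for c in alphabet:
            k = min(m, q + 1)
            # Longest prefix of the pattern that is a suffix of pattern[:q] + c.
            while k > 0 and pattern[:k] != (pattern[:q] + c)[q + 1 - k:]:
                k -= 1
            row[c] = k
        dfa.append(row)
    return dfa

def dfa_search(text, pattern, alphabet):
    """Scan the text once through the automaton; report match start positions."""
    dfa, q, hits = build_dfa(pattern, alphabet), 0, []
    for i, c in enumerate(text):
        q = dfa[q].get(c, 0)                 # unknown characters reset the automaton
        if q == len(pattern):
            hits.append(i - len(pattern) + 1)
    return hits

print(dfa_search("MOMMOMMY", "MOMMY", "MOY"))   # [3]
</syntaxhighlight>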
=== Stubs ===
[[Knuth–Morris–Pratt algorithm|Knuth–Morris–Pratt]] computes a [[deterministic finite automaton|DFA]] that recognizes inputs with the string to search for as a suffix. [[Boyer–Moore string search algorithm|Boyer–Moore]] starts searching from the end of the needle, so it can usually jump ahead a whole needle-length at each step. Baeza–Yates keeps track of whether the previous ''j'' characters were a prefix of the search string, and is therefore adaptable to [[fuzzy string searching]]. The [[bitap algorithm]] is an application of Baeza–Yates' approach.
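As an illustration of the bit-parallel idea behind bitap, here is a minimal Python sketch of the ''shift-and'' variant for exact matching (the word-size limit of the classic algorithm is ignored here because Python integers are unbounded):
<syntaxhighlight lang="python">
def bitap_search(text, pattern):
    """Return the index of the first exact match, or -1 (shift-and bitap).

    Bit i of `state` is set when the last i+1 characters of the text read
    so far equal the first i+1 characters of the pattern.
    """
    m = len(pattern)
    if m == 0:
        return 0
    masks = {}
    for i, c in enumerate(pattern):           # bit i of masks[c] set iff pattern[i] == c
        masks[c] = masks.get(c, 0) | (1 << i)
    state = 0
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & (1 << (m - 1)):            # the full pattern just ended at position j
            return j - m + 1
    return -1

print(bitap_search("abracadabra", "cad"))     # 4
</syntaxhighlight>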
=== Index methods ===
Faster search algorithms are based on preprocessing of the text. After building a [[substring index]], for example a [[suffix tree]] or [[suffix array]], the occurrences of a pattern can be found quickly. As an example, a suffix tree can be built in <math>\Theta(n)</math> time, and all <math>z</math> occurrences of a pattern can then be found in <math>O(m)</math> time, under the assumption that the alphabet has a constant size and that every inner node in the suffix tree knows which leaves lie beneath it. The latter can be arranged by running a depth-first search from the root of the suffix tree.
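As a concrete, deliberately naive sketch of an index method, the following Python code builds a suffix array by plain sorting and then answers queries with binary search; a production implementation would use a linear-time construction, and the names here are illustrative:
<syntaxhighlight lang="python">
def build_suffix_array(text):
    """Start positions of all suffixes, sorted lexicographically.

    Naive O(n^2 log n) construction; linear-time algorithms exist but are
    considerably longer.
    """
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """All start positions of `pattern`, via binary search over the suffix array."""
    m = len(pattern)

    def prefix_at(i):
        return text[sa[i]:sa[i] + m]          # first m characters of suffix sa[i]

    lo, hi = 0, len(sa)                       # lower bound: first prefix >= pattern
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix_at(mid) < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)                   # upper bound: first prefix > pattern
    while lo < hi:
        mid = (lo + hi) // 2
        if prefix_at(mid) <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

text = "banana"
sa = build_suffix_array(text)                 # [5, 3, 1, 0, 4, 2]
print(find_occurrences(text, sa, "ana"))      # [1, 3]
</syntaxhighlight>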
=== Other variants ===
Some search methods, for instance [[trigram search]], are intended to find a "closeness" score between the search string and the text rather than a "match/non-match". These are sometimes called [[Approximate string matching|"fuzzy" searches]].
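One simple way to produce such a score, sketched below, is to compare the sets of character trigrams of the two strings; this illustrates the general idea rather than any particular library's scoring function:
<syntaxhighlight lang="python">
def trigrams(s):
    """The set of 3-character substrings of s (one common trigram definition)."""
    return {s[i:i + 3] for i in range(len(s) - 2)}

def trigram_similarity(a, b):
    """Jaccard overlap of trigram sets: 1.0 for identical sets, 0.0 for disjoint sets."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

print(trigram_similarity("string search", "string searching"))   # about 0.85 (11 of 13 trigrams shared)
</syntaxhighlight>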
== See also ==
* [[Sequence alignment]]
* [[Pattern matching]]
* [[Compressed pattern matching]]
* [[Approximate string matching]]
== Academic conferences on text searching ==
* [[Combinatorial pattern matching]] (CPM), a conference on combinatorial algorithms for strings, sequences, and trees.
* [[String Processing and Information Retrieval]] (SPIRE), an annual symposium on string processing and information retrieval.
* [[Prague Stringology Conference]] (PSC), an annual conference on algorithms on strings and sequences.
* [[Competition on Applied Text Searching]] (CATS), an annual series of evaluations of text searching algorithms.
== References ==
<references />
* R. S. Boyer and J. S. Moore, ''[http://www.cs.utexas.edu/~moore/publications/fstrpos.pdf A fast string searching algorithm]'', Communications of the ACM, 20(10), 762–772 (1977).
* [[Thomas H. Cormen]], [[Charles E. Leiserson]], [[Ronald L. Rivest]], and [[Clifford Stein]]. ''[[Introduction to Algorithms]]'', Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Chapter 32: String Matching, pp. 906–932.
== External links ==
* [http://www.cs.ucr.edu/%7Estelo/pattern.html Huge (maintained) list of pattern matching links] Last updated: 12/27/2008 20:18:38
* [http://johannburkard.de/software/stringsearch/ StringSearch – high-performance pattern matching algorithms in Java] – Implementations of many string-matching algorithms in Java (BNDM, Boyer–Moore–Horspool, Boyer–Moore–Horspool–Raita, Shift-Or)
* [http://www-igm.univ-mlv.fr/~lecroq/string/index.html Exact String Matching Algorithms] – Animation in Java, detailed description and C implementation of many algorithms
* [http://www.concentric.net/~Ttwang/tech/stringscan.htm Boyer–Moore–Raita–Thomas]
* [http://www.cs.ucr.edu/~stelo/cpm/cpm04/35_Navarro.pdf Improved Single and Multiple Approximate String Matching] (PDF)
* [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2647288/ Kalign2: high-performance multiple alignment of protein and nucleotide sequences allowing external features]
[[Category:String matching algorithms| ]]