Like Boyer–Moore, Boyer–Moore–Horspool preprocesses the pattern to produce a table containing, for each symbol in the alphabet, the number of characters that can safely be skipped. The preprocessing phase, in pseudocode, is as follows (unlike the original, we use zero-based indices here):

    function preprocess(pattern)
        T ← new table of 256 integers
        for i from 0 to 256 exclusive
            T[i] ← length(pattern)
        for i from 0 to length(pattern) - 1 exclusive
            T[pattern[i]] ← length(pattern) - 1 - i
        return T

    function same(str1, str2, len)
        i ← len - 1
        while str1[i] = str2[i]    -- Note: this is equivalent to !memcmp(str1, str2, len).
            if i = 0               -- The original algorithm tries to play smart here: it checks for the
                return true        -- last character, and then starts from the first to the second-last.
            i ← i - 1
        return false

    function search(needle, haystack)
        T ← preprocess(needle)
        skip ← 0
        while length(haystack) - skip ≥ length(needle)
            if same(haystack[skip:], needle, length(needle))    -- haystack[skip:] is the substring starting at "skip"; &haystack[skip] in C.
                return skip
            skip ← skip + T[haystack[skip + length(needle) - 1]]
        return not-found
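As a sketch, the pseudocode above might be rendered in C as follows. The function name and the `(size_t)-1` not-found sentinel are illustrative choices, and `memcmp` stands in for the `same` loop, as the pseudocode's comment suggests:

```c
#include <stddef.h>
#include <string.h>

/* Returns the offset of the first occurrence of needle in haystack,
   or (size_t)-1 if there is none. Illustrative sketch, not library code. */
size_t bmh_search(const unsigned char *haystack, size_t hlen,
                  const unsigned char *needle, size_t nlen)
{
    size_t skip_table[256];
    size_t i, skip;

    if (nlen == 0 || nlen > hlen)
        return (size_t)-1;

    /* Preprocessing: by default every byte allows a skip of the full
       needle length... */
    for (i = 0; i < 256; i++)
        skip_table[i] = nlen;
    /* ...except bytes occurring in the needle (last byte excluded), which
       skip just far enough to align their last such occurrence. */
    for (i = 0; i < nlen - 1; i++)
        skip_table[needle[i]] = nlen - 1 - i;

    skip = 0;
    while (hlen - skip >= nlen) {
        if (memcmp(&haystack[skip], needle, nlen) == 0)
            return skip;
        /* Shift by the table entry for the byte under the needle's last
           position. */
        skip += skip_table[haystack[skip + nlen - 1]];
    }
    return (size_t)-1;
}
```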
The algorithm performs best with long needle strings, when it consistently hits a non-matching character at or near the final byte of the current position in the haystack and the final byte of the needle does not occur elsewhere within the needle. For instance, a 32-byte needle ending in "z" searching through a 255-byte haystack which does not have a 'z' byte in it would take up to 224 byte comparisons.

The best case is the same as for the Boyer–Moore string search algorithm in big O notation, although the constant overhead of initialization and of each loop iteration is lower.

The worst case behavior happens when the bad-character skip is consistently low and a large portion of the needle matches the haystack. The bad-character skip is only low, on a partial match, when the final character of the needle also occurs elsewhere within the needle, with a 1-byte shift happening when the same byte occupies both of the last two positions.

The canonical degenerate case, similar to the "best" case above, is a needle consisting of an 'a' byte followed by 31 'z' bytes searching a haystack consisting of 255 'z' bytes. Each alignment does 31 successful byte comparisons and one failing byte comparison, then moves forward 1 byte. This process repeats 223 more times, bringing the total byte comparisons to 7,168 (224 alignments × 32 comparisons). The worst case is significantly higher than for the Boyer–Moore string search algorithm, although it is hard to reach in normal use cases. It is worth noting that this is also the worst case for the naive memmem algorithm, although implementations of that tend to be significantly optimized.
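The 7,168 figure can be checked by instrumenting the right-to-left comparison loop. The following C sketch (illustrative only; the function name and structure are our own) counts byte comparisons over the degenerate input described above:

```c
#include <stddef.h>

/* Runs BMH over haystack and returns the number of individual byte
   comparisons performed, comparing right to left as in the `same`
   pseudocode. Illustrative sketch for counting, not library code. */
static long bmh_comparison_count(const unsigned char *h, size_t hlen,
                                 const unsigned char *n, size_t nlen)
{
    size_t table[256], i, skip = 0;
    long comparisons = 0;

    for (i = 0; i < 256; i++)
        table[i] = nlen;
    for (i = 0; i < nlen - 1; i++)
        table[n[i]] = nlen - 1 - i;

    while (hlen - skip >= nlen) {
        size_t j = nlen;
        /* Compare from the last byte of the needle backwards. */
        while (j > 0) {
            comparisons++;
            if (h[skip + j - 1] != n[j - 1])
                break;
            j--;
        }
        if (j == 0)
            return comparisons;  /* full match; never reached in this case */
        skip += table[h[skip + nlen - 1]];
    }
    return comparisons;
}
```

With the 'a'-then-31-'z' needle against 255 'z' bytes, every alignment costs 32 comparisons and shifts by one byte, and there are 224 alignments, giving 224 × 32 = 7,168.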
Tuning the comparison loop
The original algorithm had a more sophisticated same loop: it performs an extra precheck on the last character before proceeding in the positive direction:

    function same_orig(str1, str2, len)
        i ← 0
        if str1[len - 1] = str2[len - 1]
            while str1[i] = str2[i]
                if i = len - 2
                    return true
                i ← i + 1
        return false

A tuned version of the BMH algorithm is the Raita algorithm. It adds an additional precheck for the middle character, in the order of last-first-middle. The algorithm enters the full comparison loop only when the prechecks pass:

    function same_raita(str1, str2, len)
        i ← 0
        mid ← len / 2
        -- Three prechecks.
        if len ≥ 3
            if str1[mid] ≠ str2[mid]
                return false
        if len ≥ 1
            if str1[0] ≠ str2[0]
                return false
        if len ≥ 2
            if str1[len - 1] ≠ str2[len - 1]
                return false
        -- Any old comparison loop.
        return len < 3 or SAME(&str1[1], &str2[1], len - 2)

It is unclear whether this 1992 tuning still holds its performance advantage on modern machines. The author's rationale is that actual text usually contains some patterns which can be effectively prefiltered by these three characters. It appears that Raita was not aware of the old last-character precheck, so readers are advised to take the results with a grain of salt. On modern machines, library functions like memcmp tend to provide better throughput than any of these hand-written comparison loops. The behavior of an "SFC" loop in both libstdc++ and libc++ seems to suggest that a modern Raita implementation should not include any of the one-character shifts, since they have detrimental effects on data alignment.
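A minimal C rendering of the Raita-style precheck might look as follows. This is a sketch of the pseudocode above, not Raita's original code; the index checked by each length guard is our reading of the pseudocode, and `memcmp` stands in for the "any old comparison loop" over the interior bytes:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Raita-style comparison: precheck the middle, first, and last bytes,
   then compare the remaining interior. Illustrative sketch. */
static bool same_raita(const unsigned char *str1, const unsigned char *str2,
                       size_t len)
{
    size_t mid = len / 2;

    /* Three prechecks. */
    if (len >= 3 && str1[mid] != str2[mid])
        return false;
    if (len >= 1 && str1[0] != str2[0])
        return false;
    if (len >= 2 && str1[len - 1] != str2[len - 1])
        return false;
    /* Any old comparison loop over the interior bytes. */
    return len < 3 || memcmp(&str1[1], &str2[1], len - 2) == 0;
}
```

The intent of the prechecks is to reject most non-matching alignments on one to three byte comparisons before paying for the full interior comparison.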