SIMD-friendly algorithms for substring searching (2016)
(0x80.pl)
219 points by Rendello 2 days ago
Interesting. How do you know the background distribution of bytes? Won't a scan of the haystack to get the distribution take up a significant amount of time?
It's a _background_ distribution. That is, a guess. A heuristic. It doesn't change with the input.
I implemented these in my attempt to SIMD optimize Wasm/WASI libc.
https://github.com/ncruces/go-sqlite3/tree/main/sqlite3/libc
For known haystack lengths, and large needles, I found it useful to augment the approach with Quick Search.
C# has implemented some SIMD for IndexOf: https://github.com/dotnet/runtime/pull/63285
I also implemented many different algorithms for searching and splitting strings using SIMD methods: https://github.com/naver/tamgu/blob/06aedf2f14895925d7b5a8e2... I used a different algorithm than the ones presented here.
What I do is build a substring of the initial string whose length is a power of two. If the string I'm searching for is 9 characters long, I extract an 8-character substring and pack it into an integer:
Here is the example for 4 characters:

```
// We look for the presence of four characters in a row
int32_t cc = search[3];
for (shift = 2; shift >= 0; shift--) {
    cc <<= 8;
    cc |= search[shift];
}
// set1_epi32: cc is a 4-byte pattern (set1_epi16 would truncate it)
__m256i firstchar = _mm256_set1_epi32(cc);
```
In this case, I look for a 4-byte integer across my sequence:

```
// cmpeq_epi32 compares whole 4-byte lanes against the packed pattern
current_bytes = _mm256_cmpeq_epi32(firstchar, current_bytes);
q <<= 1;
q |= _mm256_movemask_epi8(current_bytes);
```

I'm looking for blocks of 4 characters at a time in my string.
I tried to implement a modified version for LZ77 window search with Zig's generic SIMD a few years ago:
I always took these as "idealized" algorithms or pseudo-code. In addition to not doing correct unaligned loads (using `memcpy`), the boundary checks are wrong. The code assumes the length of the haystack is a multiple of 8. Additionally, if the needle is empty, you'll get unsigned integer overflow leading to an out-of-bounds access on the needle. That's three different ways to get UB.
This is pretty common in my experience when demonstrating SIMD code. Often only the relevant bits are shown, and not a lot of effort is given to getting the boundary cases correct. It's rarely acknowledged explicitly, but I would assume it's because the boundary conditions are somewhat less interesting than the "meat" of how to use vectors in an interesting way.
When I modified the idealized algorithm to suit my needs a few years ago, I learned just how painful the boundary cases are. I realized I didn't want to ever touch SIMD string searching unless:
- I had a very specific use-case that I really needed SIMD for and was willing to really invest a lot in it; or
- The work had already been done for me in a generic sense and I could reap the fruits of that labour (ripgrep / Rust Regex, thanks for that by the way)
Reminds me of https://github.com/nikneym/hparse
Which uses SIMD algos for fast HTTP parsing.
If you're interested in a rough benchmark, I compared musl's “two way” with a variation on this algorithm for my SIMD optimized libc linked in a sister comment.
The benchmark involves finding this line in that file: https://github.com/ncruces/go-sqlite3/blob/v0.26.1/sqlite3/l...
The improvements over musl are in this table: https://github.com/ncruces/go-sqlite3/tree/v0.26.1/sqlite3/l...
I'd say it's a decent improvement, if not spectacular. musl does particularly well with known length strings (memmem), whereas I'm allowed to cheat for unknown length strings (strstr) because the Wasm environment allows me to skirt some UB.
The NUL-terminated string curve ball wrecks many good algorithms.
Would be good to have more SIMD algorithms in smart. Maybe I'll have a play with it if I get time.
Looking at them, they'd need a little work to make them safe (not reading past the needle or haystack). Probably not too much effort; it may need a different search for the data mod block size at the end.
It was actually 2016 with spelling fixes in 2018. They still missed a couple of errors:
"comparing two substrings is expansive" -> expensive
And:
"memcmp costs are simply ridden off" -> written
Interesting how bad Bulldozer was for stdlib, but with SSE2 it was more competitive with Westmere.
Here, it's often the opposite.
These algorithms are basically brute force with very little setup, and can become quadratic for large, periodic needles.
String search algorithms that avoid quadratic behaviour (and may even be sublinear) have a higher setup cost that comes from analyzing the needle's structure.
What? It all depends on the sizes of the strings. Hardware benchmarks, for each significant hardware µarchitecture, must be run on combinations of string sizes to know when the SIMD setup code is worth it.
Often the setup code is not worth it, and in many use cases even the SIMD complexity doesn't bring enough performance (it's basically kept for niche applications, with a little statically linked lib for those).
Try Nim. It's pretty easy to get the hang of it for simple things, and you can make a python module too.
Hi Austin! Was just checking your blog and the vowels detection post (<https://austinhenley.com/blog/vowels.html>).
What exactly do you mean by “use SIMD directly without calling out to another language”?
In some way, assembly will probably be another language anyway… but that's a technicality.
I guess the spectrum of SIMD-related work in relation to Python is quite broad. There are projects like PeachPy, to help one write x86 Asm in Python, new Python’esque languages like Mojo, or SIMD libraries with thin CPython bindings. Do you mean one of those?
PS: I’m in the last camp. And in case you are focused on ASCII, for vowel search, have you tried StringZilla’s `find_first_of` (<https://github.com/ashvardanian/StringZilla/blob/960967d78c7...>)? I think it should perform well ;)
Why would you want to? If you are performance bound with Python code, you can pick up a 20-50x performance improvement by switching languages.
Right, but if there's only a small portion of my code that does string search and it's a hot path, it would still be much much much more convenient to access SIMD-based string search code direct from Python rather than writing the code (LLM or not) in another language and then construct bindings (LLM or not).
The "AVX2 (generic)" approach is roughly what ripgrep uses (via Rust's `regex` crate) to accelerate most searches. Even something like `\w+\s+Sherlock\s+\w+` will benefit since ripgrep will pluck `Sherlock` out of the regex and search that.
The actual implementation is here: https://github.com/BurntSushi/memchr?tab=readme-ov-file#algo...
The main difference with the algorithm presented here is that instead of always using the first and last bytes of the needle, a heuristic is used to try to pick two bytes that occur less frequently according to a background frequency distribution.
It ends up being quite a bit faster than just plain Two-Way or even GNU libc's implementation of `memmem`. From the root of the `memchr` repository:
The "prebuilt" benchmarks also demonstrate the API deficiencies of routines like `memmem`. Because of their API, they need to rebuild the state necessary to execute a search for the needle given. But it is often the case that the needle is invariant across repeated calls with different haystacks. So you end up doing that repeated work for no gain.