This is a multipurpose question:
- How does this compare to the glibc strlen implementation?
- Is there a better way to do this, both in general and for auto-vectorization?
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <stdbool.h>
/* 0x0101...01: the low bit of every byte set */
#define WORD_ONES_LOW ((size_t)-1 / UCHAR_MAX)
/* 0x8080...80: the high bit of every byte set */
#define WORD_ONES_HIGH (((size_t)-1 / UCHAR_MAX) << (CHAR_BIT - 1))
/*@doc
 * @desc: test whether a word-sized chunk of the string contains a zero byte
 * @param: w - pointer into the string, aligned to the word size
 */
static inline bool word_has_zero(const size_t *w)
{
    return ((*w - WORD_ONES_LOW) & ~*w & WORD_ONES_HIGH);
}
/*@doc
* @desc: see POSIX strlen()
* @param: s - string
*/
size_t strlen(const char *s)
{
    const char *z = s;

    /* Scan byte-wise until s is aligned to the word size */
    for (; ((uintptr_t)s & (sizeof(size_t) - 1)) && *s != '\0'; s++);
    if (*s != '\0') {
        const size_t *w;

        /* Scan word-wise until a word containing a zero byte is found,
         * then locate the terminator byte-wise within that word */
        for (w = (const size_t *)s; !word_has_zero(w); w++);
        for (s = (const char *)w; *s != '\0'; s++);
    }
    return (s - z);
}
Also, please note that this implementation can read past the end of a char array (the word-wise loop may read up to `sizeof(size_t) - 1` bytes beyond the terminator) and therefore relies on undefined behaviour.
To answer your second question: I think the naive byte-based `strlen` implementation will result in better auto-vectorization by the compiler, provided it is smart and support for vector instruction set extensions (e.g. SSE) has been enabled (e.g. with `-msse` or an appropriate `-march`). Unfortunately, it won't result in any vectorization on baseline CPUs which lack these features, even though the compiler could generate 32- or 64-bit pseudo-vectorized code like the C code cited in the question, if it were smart enough...

As for the first question: this implementation is based on virtually the same trick (determine whether a word contains a zero byte) as the glibc implementation you linked. They do pretty much the same thing, except that in the glibc version some loops are unrolled and the bit masks are spelled out explicitly. The `WORD_ONES_LOW` and `WORD_ONES_HIGH` from the code you posted are exactly the `lomagic = 0x01010101L` and `himagic = 0x80808080L` from the glibc version.

The only difference I see is that the glibc version uses a slightly different criterion for detecting a zero byte: it omits the `... & ~longword` part (compare with the `word_has_zero()` function in your example, which does the same thing but also includes the `~*w` term). Apparently the glibc authors believed this shorter formula is more efficient, yet it can produce false positives, so they check for false positives inside that `if`.

It is indeed an interesting question which approach is more efficient: a single-stage precise test (your code) or a two-stage test that begins with a rough, imprecise check followed, if necessary, by a precise second check (the glibc code).
If you want to see how they compare in terms of actual performance, time them on your platform with your data. There's no other way.