strlen performance implementation

Published 2019-01-24 09:42

This is a multipurpose question:

  • How does this compare to the glibc strlen implementation?
  • Is there a better way to do this, in general and for autovectorization?

Random filler crap because stackoverflow somehow knows better than me about code to summary ratio

#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <limits.h>
#include <stdbool.h>

/* Repeating-byte masks for the native word:
 * WORD_ONES_LOW  is 0x0101...01 (the low bit of every byte set),
 * WORD_ONES_HIGH is 0x8080...80 (the high bit of every byte set). */
#define WORD_ONES_LOW   ((size_t)-1 / UCHAR_MAX)
#define WORD_ONES_HIGH  (((size_t)-1 / UCHAR_MAX) << (CHAR_BIT - 1))

/*@doc
 * @desc: test whether an arch word contains a zero byte
 * @param: w - pointer into the string, aligned to the word size
 */
static inline bool word_has_zero(const size_t *w)
{
    return ((*w - WORD_ONES_LOW) & ~*w & WORD_ONES_HIGH);
}

/*@doc
 * @desc: see POSIX strlen()
 * @param: s - string
 */
size_t strlen(const char *s)
{
    const char *z = s;

    /* Align to word size */
    for (; ((uintptr_t)s & (sizeof(size_t) - 1)) && *s != '\0'; s++);

    if (*s != '\0') {
        const size_t *w;

        for (w = (const size_t *)s; !word_has_zero(w); w++);
        for (s = (const char *)w; *s != '\0'; s++);
    }

    return (s - z);
}

3 Answers

爷的心禁止访问
#2 · 2019-01-24 10:00

Also, please note this implementation can read past the end of a char array here:

for (w = (const size_t *)s; !word_has_zero(w); w++);

and therefore relies on undefined behaviour.

唯我独甜
#3 · 2019-01-24 10:17

To answer your second question: I think the naive byte-based strlen implementation will auto-vectorize better, provided the compiler is smart and support for vector instruction-set extensions (e.g. SSE) has been enabled (e.g. with -msse or an appropriate -march). Unfortunately, it won't vectorize at all on baseline CPUs that lack these features, even though the compiler could in principle generate 32- or 64-bit pseudo-vectorized code like the C code cited in the question, if it were smart enough...

对你真心纯属浪费
#4 · 2019-01-24 10:19

Well, this implementation is based on essentially the same trick (Determine if a word has a zero byte) as the glibc implementation you linked. They do pretty much the same thing, except that in the glibc version some loops are unrolled and the bit masks are spelled out explicitly. The WORD_ONES_HIGH and WORD_ONES_LOW from the code you posted are exactly himagic = 0x80808080L and lomagic = 0x01010101L from the glibc version.

The only difference I see is that the glibc version uses a slightly different criterion for detecting a zero byte

if ((longword - lomagic) & himagic)

without doing ... & ~longword (compare to the word_has_zero() check in your example, which does the same subtraction but also ANDs in ~*w). Apparently the glibc authors believed this shorter formula is more efficient. Yet it can produce false positives, so they check for and reject them inside that if.

It is indeed an interesting question which is more efficient: a single-stage precise test (your code) or a two-stage test, where a rough, imprecise check is followed, if necessary, by a precise second one (the glibc code).

If you want to see how they compare in terms of actual performance - time them on your platform and your data. There's no other way.
