printf and %llu vs %lu on OS X [duplicate]

Posted 2019-02-21 18:10

Possible Duplicate:
how to printf uint64_t?

Why is it that on my 64-bit Mac (I am using Clang) the uint64_t type is unsigned long long while on 64-bit Ubuntu the uint64_t type is unsigned long?

This makes it very difficult to get my printf calls to compile without warnings (or even to work correctly) in both environments.

I can use macros to choose the correct format string (#define LU as either "%llu" or "%lu", which uglifies the printf strings a bit), but on the Mac I have a 64-bit word size (so _LP64 is defined and UINTPTR_MAX != 0xffffffff), and yet it still uses long long for the 64-bit integer types.

// printf macro switch (for the uint64_t's)
#if UINTPTR_MAX == 0xffffffff 
   // 32-bit
#  define LU "%llu"
#else 
   // assume 64-bit
   // special case for OS X because it is strange
   // should actually check also for __MACH__ 
#  ifdef __APPLE__
#    define LU "%llu"
#  else
#    define LU "%lu"
#  endif
#endif

5 Answers
爷的心禁止访问
Answer 2 · 2019-02-21 18:26

Unfortunately the standard is not very specific about the underlying types... the only guarantees are minimum widths (long is at least 32 bits, long long at least 64) and the ordering sizeof(int) <= sizeof(long) <= sizeof(long long).

You can use macros like you said, or you can use %ju or %zu, which print uintmax_t and size_t respectively (both 64-bit on OS X; I haven't tested on Ubuntu); remember to cast the argument to the matching type. I don't think there are any other options.

贼婆χ
Answer 3 · 2019-02-21 18:35

The answer is to promote via an explicit cast:

some_type i = 5;
printf("our value is: %llu", (unsigned long long)i);
Answer 4 · 2019-02-21 18:39

The macros are already defined for you in <cinttypes> (or <inttypes.h> in C). Try

printf("%" PRIu64, x);

(The space before PRIu64 matters in C++11, where an adjacent identifier would otherwise be parsed as a user-defined-literal suffix.)

Or, even better, use C++ features like

std::cout << x;

which will select the proper << operator for your variable type.

疯言疯语
Answer 5 · 2019-02-21 18:39

The underlying type of uint64_t can be whatever the implementation likes, as long as it is in fact 64 bits wide.

Obviously in C++ the preferred solution is to use iostreams instead of printf, since then the problem disappears. But you can always cast the value passed to printf to make the type correct:

printf("%llu", static_cast<unsigned long long>(value));

祖国的老花朵
Answer 6 · 2019-02-21 18:39

I'm sure other people will tell you to use Boost, so in the interest of providing a solution that does not depend on Boost:

I have run into the same problem so often that I gave up and wrote my own helper macros that feed into %s instead of any flavor of %llu or %lu. I also found this helpful for keeping format strings sane, and for producing better (and more consistent) hex and pointer printouts. There are two caveats:

  1. You can't easily combine extra formatting parameters (left/right justification, padding, etc) -- but then you can't really do that with the LU macro either.

  2. This approach adds extra overhead to formatting and printing strings. However, I write performance-critical apps and I haven't noticed it being an issue, except in Microsoft's Visual C++ debug builds (which take about 200x longer to allocate and free heap memory than normal because of all the internal validation and corruption checks).

Here's a comparison:

printf( "Value1: " LU ", Value2: " LU, somevar1, somevar2 );

vs.

printf( "Value1: %s, Value2: %s", cStrDec(somevar1), cStrDec(somevar2) );

To make it work, I used a set of macros and templates like this:

#define cStrHex( value )        StrHex    ( value ).c_str()
#define cStrDec( value )        StrDecimal( value ).c_str()

std::string StrDecimal( const uint64_t& src )
{
    // Note: printing the two 32-bit halves with "%u%u" would NOT produce
    // the correct decimal value, so format the whole thing at once. The
    // format string never leaks to call sites, so using %llu here is fine.
    return StrFormat( "%llu", (unsigned long long)src );
}

std::string StrDecimal( const int64_t& src )
{
    return StrFormat( "%lld", (long long)src );
}

std::string StrDecimal( const uint32_t& src )
{
    return StrFormat( "%u", src );
}

std::string StrDecimal( const int32_t& src )
{
    return StrFormat( "%d", src );
}

std::string StrHex( const uint64_t& src, const char* sep="_" )
{
    return StrFormat( "0x%08x%s%08x", uint32_t(src>>32), sep, uint32_t(src) );
}

std::string StrHex( const int64_t& src, const char* sep="_" )
{
    return StrFormat( "0x%08x%s%08x", uint32_t(src>>32), sep, uint32_t(src) );
}

// Repeat implementations for int32_t, int16_t, int8_t, etc.
// I also did versions for 128-bit and 256-bit SIMD types, since I use those.
// [...]

My string formatting functions are based on the now-preferred method of formatting directly into a std::string, which looks something like this:

std::string StrFormatV( const char* fmt, va_list list )
{
#ifdef _MSC_VER
    int destSize = _vscprintf( fmt, list );
#else
    va_list l2;
    va_copy(l2, list);
    int destSize = vsnprintf( nullptr, 0, fmt, l2 );
    va_end(l2);
#endif
    if (destSize < 0)
        return std::string();           // formatting error
    std::string result;
    result.resize( destSize + 1 );      // room for the '\0' vsnprintf writes
    if (destSize > 0)
        vsnprintf( &result[0], destSize + 1, fmt, list );
    result.resize( destSize );          // trim the terminator back off
    return result;
}

std::string StrFormat( const char* fmt, ... )
{
    va_list list;
    va_start( list, fmt );
    std::string result = StrFormatV( fmt, list );
    va_end( list );

    return result;
}