Cross Compile or Compile Native for CPU Arch

Published 2020-04-14 04:02

When writing software that is CPU-architecture dependent, such as C code running on x86 or on ARM CPUs, there are generally two ways to compile it: either cross-compile for the ARM CPU arch (if you're developing on an x86 system, for example), or copy your code to a system with the native CPU arch and compile it natively.

I'm wondering if there is a benefit to the native approach vs. the cross-compile approach? I noticed that the Fedora ARM team is using a build-server cluster of slow/low-power ARM devices to natively compile their Fedora ARM spin... surely a project backed by Red Hat has access to some powerful x86 build servers that could get the job done in half the time... so why their choice? Am I missing something by cross-compiling my software?

5 Answers
[account banned]
Answer 1 · 2020-04-14 04:21

No, technically you're not missing anything by cross-compiling, within the context of .c -> .o -> a.out (or whatever); a cross compiler will give you the same binary as a native compiler (versions etc. notwithstanding).

The "advantages" of building natively come from post-compile testing and managing complex systems.

1) If I can run unit tests quickly after compiling, I can get to any bugs/issues quickly; the cycle is presumably shorter than the cross-compiling cycle.

2) If I am compiling some target software that uses 3rd-party libraries, then building, deploying and then using them to build my target would probably be easier on the native platform; I don't want to deal with the cross-compile builds of those, because half of them have build processes written by crazy monkeys that make cross-compiling them a pain.

Typically for most things one would try to get to a base build and then compile the rest natively, unless I have a sick setup where my cross compiler is super wicked fast and the time I save there is worth the setup required to make the rest of the things (such as unit testing and dependency management) easier.

At least those are my thoughts.

三岁会撩人
Answer 2 · 2020-04-14 04:24

The main benefit is that ./configure scripts do not need to be tweaked when running natively. Even if you are using a shadow rootfs, you still have configure scripts running uname to detect the CPU type, etc. For instance, see this question. pkg-config and other tools try to ease cross-building, but packages normally get native building on x86 correct first, and then maybe native building on ARM; cross-building can be painful, as each package may need individual tweaks.
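For illustration, this is roughly the kind of host detection a configure-style script does, and the flag a cross build has to pass to override it (the triplet shown is just an example):

```shell
# What a configure-style script typically does to guess the build CPU:
build_cpu=$(uname -m)          # e.g. x86_64 on a PC, aarch64/armv7l on ARM
echo "building on: $build_cpu"
# A cross build must override the guess explicitly instead, e.g.:
#   ./configure --host=aarch64-linux-gnu
# Every package that ignores --host somewhere is a package you get to patch.
```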

Finally, if you are doing profile-guided optimizations and running test suites, as per Joachim's answer, it is pretty much impossible to do this in a cross-build environment.

Compile speed on the ARM is significantly faster than the human package builder's cycle of reading configure output, editing configure, re-running configure, compiling, and linking.

This also fits well with a continuous integration strategy. Various packages, especially libraries, can be built/deployed/tested quickly. The testing of libraries may involve hundreds of dependent packages. ARM Linux distributions will typically need to prototype changes when upgrading and patching a base library, which may have hundreds of dependent packages that at least need retesting. A slow cycle done by a computer is always better than a fast compile followed by manual human intervention.

倾城 Initia
Answer 3 · 2020-04-14 04:25

Although many people think "native compile" is more beneficial, or at least no different from "cross compile", the truth is quite the contrary.

For people who work at a lower level, e.g. on the Linux kernel, copying code around between build platforms is usually painful. Taking x86 and ARM as an example, the obvious idea is to set up a native ARM build environment, but it is a bad idea.

The binaries are not always the same; for example:

# diff hello_x86.ko hello_arm.ko
Binary files hello_x86.ko and hello_arm.ko differ
# diff hello_x86_objdump.txt hello_arm_objdump.txt
2c8
< hello_x86.ko:     file format elf64-littleaarch64
---
> hello_arm.ko:     file format elf64-littleaarch64
26,27c26,27
<    8: 91000000        add     x0, x0, #0x0
<    c: 910003fd        mov     x29, sp
---
>    8: 910003fd        mov     x29, sp
>    c: 91000000        add     x0, x0, #0x0

Generally, higher-level apps are fine built either way; for lower-level (hardware-related) work, cross-compiling from x86 is suggested, since x86 has a much better toolchain.

Anyway, compilation is mostly a matter of GCC, glibc, and shared libraries (lib.so), and if one is familiar with these, either way should be easy to go.

PS: Below is the source code

# cat hello.c
#include <linux/module.h>      /* Needed by all modules */
#include <linux/kernel.h>      /* Needed for KERN_ALERT */
#include <linux/init.h>        /* Needed for the macros */



static int hello3_data __initdata = 3;


static int __init hello_3_init(void)
{
   printk(KERN_ALERT "Hello, world %d\n", hello3_data);
   return 0;
}


static void __exit hello_3_exit(void)
{
   printk(KERN_ALERT "Goodbye, world 3\n");
}


module_init(hello_3_init);
module_exit(hello_3_exit);

MODULE_LICENSE("GPL"); 
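For reference, a minimal kbuild Makefile for the module above might look like the following; the ARM64 kernel tree path and the aarch64-linux-gnu- toolchain prefix are assumptions you would adjust for your setup:

```make
obj-m += hello.o

# Native build against the running kernel's headers:
all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

# Cross build (assumed ARM64 kernel tree and toolchain prefix):
cross:
	$(MAKE) -C $(HOME)/linux-arm64 ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```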
手持菜刀，她持情操
Answer 4 · 2020-04-14 04:28

It depends a lot on the compiler, i.e., how the toolchain handles the difference between native and cross compiling. Is it simply a case of the toolchain always thinking it is being built as a cross compiler, where one way to build it is to let the configure script auto-detect the host rather than you specifying it manually (and auto-set the prefix, etc.)?

Don't assume that just because a compiler is built to be native it is really native. There are many instances where distros dumb down their native compiler (and kernel and other binaries) so that the distro runs on a wider range of systems. On an ARMv6 system you might be running a compiler that defaults to ARMv4, for example.

That begs a similar question to your own: if I build the toolchain with one default architecture and then specify another, is that different from building the toolchain for the target architecture?

Ideally you would hope that a mostly debugged compiler/toolchain would give you the same results whether you compiled natively or cross-compiled, independent of the default architecture. Now, I have seen on an older llvm that llvm-gcc, when run on a 64-bit host cross-compiling to ARM, would build all ints as 64-bit, adding a lot to the code; the same compiler version and same source code on a 32-bit host would give different results (32-bit ints). Basically the -m32 switch did not work for llvm-gcc (at the time). I don't know if that is still the case, as I switched to clang when doing llvm work and never looked back at llvm-gcc. llvm/clang, for example, is mostly a cross compiler all the time; the linker is the only thing that appears to be host specific. You can take an off-the-shelf llvm and compile for any of the targets on any host system (provided your build didn't disable any of the supported targets, of course).

Rolldiameter
Answer 5 · 2020-04-14 04:32

The only benefit of compiling natively is that you don't have to transfer the program to the target platform, as it's already there.

However, that is not such a big benefit when you consider that most target platforms are massively underpowered compared to a modern x86 PC. The larger amount of memory, faster CPU, and especially much faster disks make compilation many times quicker on a PC. So much so that the advantage of native building isn't really an advantage anymore.
