Decompilation of (binary, NDK) C apps vs. Java apps

Published 2019-03-14 04:25

Question:

Well,

since I'm interested in reverse engineering, I have spent a lot of time on Android reverse engineering so far.

Nevertheless, I got to a point where I ran into compiled, binary C code (NDK), and I learned that it is much more difficult to decompile that back to C/C++ than to decompile a DEX file back to more or less readable Java sources.

What's the reason for this? I mean, the bytecode is executed by the Dalvik VM, while an ordinary binary file is executed directly by the real processor instead. Both situations seem pretty similar apart from some additional emulation layers, don't they? At the moment I don't see many differences, or the reason for this problem.

Does anyone have any information on why it's more difficult to decompile an ordinary binary file (e.g. ELF or MS EXE) back to C source?

Thanks.

Answer 1:

The short answer is that C/C++ code does not contain any reflective information, and C/C++ compilers perform function inlining, macro expansion, and loop unrolling to a degree that the Java compiler simply does not. It is also possible to optimize C/C++ so aggressively that all you can do is decompile to assembly, because no references to the application's own functions remain. (References to the system's functions will still be found, though.)
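To make this concrete, here is a minimal, hypothetical C sketch (not from the original post) of constructs that typically leave no trace in an optimized binary, whereas a DEX file compiled from the equivalent Java keeps class names, method names, and signatures. Compiler behavior varies, so this assumes something like `gcc -O2`:

```c
/* Hypothetical example: build with  gcc -O2 -S example.c  and compare the
 * generated assembly to this source. */

#include <stdio.h>

/* The preprocessor replaces this macro before compilation; the name
 * SQUARE never reaches the object file. */
#define SQUARE(x) ((x) * (x))

/* With -O2 the compiler will usually inline this static function into its
 * caller, so no separate symbol or call instruction is left behind. */
static int scale(int v)
{
    return v * 3 + 1;
}

int main(void)
{
    int total = 0;

    /* A small fixed-count loop like this is often unrolled, and the whole
     * computation may be folded into a single constant at compile time. */
    for (int i = 0; i < 4; i++)
        total += SQUARE(scale(i));

    /* A decompiler may then only see something like printf("%d\n", 166);
     * the macro, the helper function, and the loop are all gone. */
    printf("%d\n", total);
    return 0;
}
```

A Java version of the same logic, compiled to DEX, would still carry the class name, the method name and signature of the helper, and the loop structure, which is why tools can reconstruct fairly readable Java from it.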



Answer 2:

BTW, the Hex-Rays ARM Decompiler makes the reverse-engineering job much easier; check this out: http://www.hex-rays.com/hexarm_compare0.shtml

The other issue is that it costs quite a lot...