How to avoid reverse engineering of an APK file?

Posted 2018-12-31 02:53

I am developing a payment processing app for Android, and I want to prevent a hacker from accessing any resources, assets or source code from the APK file.

If someone changes the .apk extension to .zip, they can unzip it and easily access all the app's resources and assets, and using dex2jar and a Java decompiler they can also access the source code. It is very easy to reverse engineer an Android APK file; for more details, see the Stack Overflow question Reverse engineering from an APK file to a project.

I have used the ProGuard tool provided with the Android SDK. When I reverse engineer an APK file generated using a signed keystore and ProGuard, I get obfuscated code.

However, the names of Android components remain unchanged, and some code, such as the key-values used in the app, is also left intact. According to the ProGuard documentation, the tool cannot obfuscate components that are referenced in the manifest file.
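For reference, obfuscation is switched on per build type in Gradle. Here is a minimal sketch, assuming a standard Android Gradle project; the file names shown are the defaults:

    // app/build.gradle (module level); file names are the defaults
    android {
        buildTypes {
            release {
                minifyEnabled true   // shrink and obfuscate release builds
                proguardFiles getDefaultProguardFile('proguard-android.txt'),
                              'proguard-rules.pro'
            }
        }
    }

Note that the default rules file deliberately keeps every class declared in the manifest (activities, services, receivers), which is exactly why those names survive in decompiled output.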

Now my questions are:

  1. How can I completely prevent reverse engineering of an Android APK? Is this possible?
  2. How can I protect all the app's resources, assets and source code so that hackers can't hack the APK file in any way?
  3. Is there a way to make hacking more difficult, or even impossible? What more can I do to protect the source code in my APK file?

30 answers
心情的温度
#2 · 2018-12-31 03:30

I suggest you look at Protect Software Applications from Attacks. It's a commercial service; a friend's company used it and they were happy with it.

后来的你喜欢了谁
#3 · 2018-12-31 03:30

APK signature scheme v2 in Android N

The PackageManager class now supports verifying apps using the APK signature scheme v2. The APK signature scheme v2 is a whole-file signature scheme that significantly improves verification speed and strengthens integrity guarantees by detecting any unauthorized changes to APK files.

To maintain backward-compatibility, an APK must be signed with the v1 signature scheme (JAR signature scheme) before being signed with the v2 signature scheme. With the v2 signature scheme, verification fails if you sign the APK with an additional certificate after signing with the v2 scheme.

APK signature scheme v2 support will be available later in the N Developer Preview.

http://developer.android.com/preview/api-overview.html#apk_signature_v2
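For illustration, once v2 support shipped in the build tools, you could sign with both schemes from the command line using apksigner; the keystore path and key alias below are placeholders:

    apksigner sign --ks release.jks --ks-key-alias mykey \
        --v1-signing-enabled true --v2-signing-enabled true app-release.apk

    # check which schemes the APK actually carries
    apksigner verify --verbose app-release.apk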

何处买醉
#4 · 2018-12-31 03:31

First rule of app security: Any machine to which an attacker gains unrestricted physical or electronic access now belongs to your attacker, regardless of where it actually is or what you paid for it.

Second rule of app security: Any software that leaves the physical boundary an attacker cannot penetrate now belongs to your attacker, regardless of how much time you spent coding it.

Third rule of app security: Any information that leaves that same impenetrable boundary now belongs to your attacker, no matter how valuable it is to you.

The foundations of information technology security are based on these three fundamental principles; the only truly secure computer is the one locked in a safe, inside a Faraday cage, inside a steel cage. There are computers that spend most of their service lives in just this state; once a year (or less often), they generate the private keys for trusted root certification authorities (in front of a host of witnesses, with cameras recording every inch of the room in which they are located).

Now, most computers are not used in these kinds of environments; they're physically out in the open, connected to the Internet over a wireless radio channel. In short, they're vulnerable, as is their software. They are therefore not to be trusted. There are certain things that computers and their software must know or do in order to be useful, but care must be taken to ensure that they can never know or do enough to cause damage (at least not permanent damage outside the bounds of that single machine).

You already knew all this; that's why you're trying to protect the code of your application. But, therein lies the first problem; obfuscation tools can make the code a mess for a human to try to dig through, but the program still has to run; that means the actual logic flow of the app and the data it uses are unaffected by obfuscation. Given a little tenacity, an attacker can simply un-obfuscate the code, and that's not even necessary in certain cases where what he's looking at can't be anything else but what he's looking for.

Instead, you should be trying to ensure that an attacker cannot do anything with your code, no matter how easy it is for him to obtain a clear copy of it. That means, no hard-coded secrets, because those secrets aren't secret as soon as the code leaves the building in which you developed it.

These key-values you have hard-coded should be removed from the application's source code entirely. Instead, they should live in one of three places: volatile memory on the device, which is harder (but still not impossible) for an attacker to obtain an offline copy of; permanently on the server cluster, to which you control access with an iron fist; or in a second data store unrelated to your device or servers, such as a physical card or your user's own memory (meaning it will eventually pass through volatile memory, but it doesn't have to stay there long).

Consider the following scheme. The user enters their credentials for the app from memory into the device. You must, unfortunately, trust that the user's device is not already compromised by a keylogger or Trojan; the best you can do in this regard is to implement multi-factor security, by remembering hard-to-fake identifying information about the devices the user has used (MAC/IP address, IMEI, etc.), and by providing at least one additional channel through which a login attempt on an unfamiliar device can be verified.
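As a small illustration of that device-identification signal, here is a minimal sketch in Java; ANDROID_ID is a real Android identifier, but the class name and its use for risk scoring are assumptions for the example:

    import android.content.Context;
    import android.provider.Settings;

    public final class DeviceFingerprint {
        // ANDROID_ID usually survives app reinstalls but can be reset or
        // spoofed; send it to the server as one weak signal among several,
        // never as proof of identity on its own.
        public static String androidId(Context context) {
            return Settings.Secure.getString(
                    context.getContentResolver(), Settings.Secure.ANDROID_ID);
        }
    }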

The credentials, once entered, are obfuscated by the client software (using a secure hash), and the plain-text credentials discarded; they have served their purpose. The obfuscated credentials are sent over a secure channel to the certificate-authenticated server, which hashes them again to produce the data used to verify the validity of the login. This way, the client never knows what is actually compared to the database value, the app server never knows the plaintext credentials behind what it receives for validation, the data server never knows how the data it stores for validation is produced, and a man in the middle sees only gibberish even if the secure channel were compromised.
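A minimal sketch of that client-side step in Java, assuming the salt is supplied by the server and TLS protects the channel; the iteration count and algorithm choice are illustrative, not a prescription:

    import java.security.spec.KeySpec;
    import java.util.Arrays;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public final class CredentialHasher {
        public static byte[] obfuscate(char[] password, byte[] salt) throws Exception {
            KeySpec spec = new PBEKeySpec(password, salt, 10_000, 256);
            SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
            byte[] derived = factory.generateSecret(spec).getEncoded();
            Arrays.fill(password, '\0'); // the plain-text credential has served its purpose
            return derived;              // send over TLS; the server hashes it again
        }
    }

The server then hashes this derived value once more with its own salt before comparing it against the stored record, so no single party ever holds the whole picture.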

Once verified, the server transmits back a token over the channel. The token is only useful within the secure session, is composed of either random noise or an encrypted (and thus verifiable) copy of the session identifiers, and the client application must send this token over the same channel as part of any request to do something. The client application will do this many times, because it can't do anything involving money, sensitive data, or anything else that could be damaging by itself; it must instead ask the server to do the task.

The client application will never write any sensitive information to persistent memory on the device itself, at least not in plain text; the client can ask the server over the secure channel for a symmetric key with which to encrypt any local data, and the server will remember that key; in a later session the client can ask the server for the same key to decrypt the data for use in volatile memory. That data won't be the only copy, either; anything the client stores should also be transmitted in some form to the server.
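A minimal sketch of the local-encryption piece in Java, assuming the AES key was fetched from the server for this session and lives only in memory; the class name and the key-delivery mechanism are assumptions for the example:

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.SecretKeySpec;

    public final class LocalVault {
        public static byte[] encrypt(byte[] plaintext, byte[] serverKey) throws Exception {
            byte[] iv = new byte[12];               // fresh nonce per encryption
            new SecureRandom().nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(serverKey, "AES"),
                    new GCMParameterSpec(128, iv)); // 128-bit authentication tag
            byte[] ciphertext = cipher.doFinal(plaintext);
            // persist iv + ciphertext; without the server-held key it is gibberish
            byte[] out = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
            return out;
        }
    }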

Obviously, this makes your application heavily dependent on Internet access; the client device cannot perform any of its basic functions without proper connection to and authentication by the server. No different than Facebook, really.

Now, the computer that the attacker wants is your server, because it and not the client app/device is the thing that can make him money or cause other people pain for his enjoyment. That's OK; you get much more bang for your buck spending money and effort to secure the server than in trying to secure all the clients. The server can be behind all kinds of firewalls and other electronic security, and additionally can be physically secured behind steel, concrete, keycard/pin access, and 24-hour video surveillance. Your attacker would need to be very sophisticated indeed to gain any kind of access to the server directly, and you would (should) know about it immediately.

The best an attacker can do is steal a user's phone and credentials and log in to the server with the limited rights of the client. Should this happen, then, just like losing a credit card, the legitimate user should be instructed to call an 800 number (preferably one that is easy to remember and not printed on the back of a card carried in the purse, wallet or briefcase that could be stolen alongside the mobile device) from any phone they can access, connecting them directly to your customer service. They state that their phone was stolen and provide some basic unique identifier; the account is then locked, any transactions the attacker may have been able to process are rolled back, and the attacker is back to square one.

余生无你
#5 · 2018-12-31 03:31

If we want to make reverse engineering (almost) impossible, we can put the application on a highly tamper-resistant chip, which executes all the sensitive stuff internally and communicates with the host over some protocol so that a controlling GUI can run there. Even tamper-resistant chips are not 100% crack-proof; they just set the bar a lot higher than software methods do. Of course, this is inconvenient: the application requires a little USB dongle holding the chip to be plugged into the device.

The question doesn't reveal the motivation for wanting to protect this application so jealously.

If the aim is to improve the security of the payment method by concealing whatever security flaws the application may have (known or otherwise), it is completely wrongheaded. The security-sensitive bits should in fact be open-sourced, if that is feasible. You should make it as easy as possible for any security researcher who reviews your application to find those bits, scrutinize their operation, and contact you. Payment applications should not contain any embedded certificates; that is to say, there should be no server application which trusts a device simply because it has a fixed certificate from the factory. A payment transaction should be made on the user's credentials alone, using a correctly designed end-to-end authentication protocol which precludes trusting the application, or the platform, or the network, etc.

If the aim is to prevent cloning, then short of that tamper-proof chip there isn't anything you can do to protect the program from being reverse-engineered and copied, so that someone incorporates a compatible payment method into their own application, giving rise to "unauthorized clients". There are, however, ways to make it difficult to develop unauthorized clients.

One would be to create checksums based on snapshots of the program's complete state: all of the state variables, for everything (GUI, logic, whatever). A clone program will not have exactly the same internal state. Sure, it is a state machine with similar externally visible state transitions (as can be observed from inputs and outputs), but hardly the same internal state. A server application can interrogate the program: what is your detailed state? (i.e. give me a checksum over all of your internal state variables). This can be compared against dummy client code which executes on the server in parallel, going through the genuine state transitions. A third-party clone would have to replicate all of the relevant state changes of the genuine program in order to give the correct responses, which will hamper its development, as in the sketch below.
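A minimal sketch of such a state checksum in Java; the particular state variables and the server-supplied nonce are assumptions for the example, not part of any standard protocol:

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public final class StateChecksum {
        // Digest every relevant state variable in a fixed, agreed order so the
        // server can compute the same value from its parallel dummy client.
        public static byte[] digest(long balanceCents, int screenId,
                                    String sessionId, byte[] serverNonce) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(Long.toString(balanceCents).getBytes(StandardCharsets.UTF_8));
            md.update((byte) screenId);
            md.update(sessionId.getBytes(StandardCharsets.UTF_8));
            md.update(serverNonce); // a fresh nonce prevents replaying old answers
            return md.digest();
        }
    }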

唯独是你
#6 · 2018-12-31 03:33

As someone who has worked extensively on payment platforms, including one mobile payments application (MyCheck), I would say that you need to delegate this behaviour to the server. No username or password for the payment processor (whichever it is) should be stored or hard-coded in the mobile application. That's the last thing you want, because the source can be understood even if you obfuscate the code.

Also, you shouldn't store credit cards or payment tokens in the application; everything should, again, be delegated to a service you built. This will also make it easier for you to become PCI-compliant later on, and the credit card companies won't breathe down your neck (like they did with us).
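A minimal sketch of what that delegation can look like from the client side, in Java; the endpoint URL, JSON shape, and token names are hypothetical:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public final class PaymentClient {
        // The client never sees processor credentials; it only forwards a
        // one-time card token (obtained from the processor's SDK or a hosted
        // form) to your own backend, which performs the actual charge.
        public static int charge(String sessionToken, String oneTimeCardToken,
                                 long amountCents) throws Exception {
            URL url = new URL("https://api.example.com/v1/charges"); // hypothetical endpoint
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + sessionToken);
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            String body = "{\"card_token\":\"" + oneTimeCardToken + "\","
                        + "\"amount_cents\":" + amountCents + "}";
            try (OutputStream os = conn.getOutputStream()) {
                os.write(body.getBytes(StandardCharsets.UTF_8));
            }
            return conn.getResponseCode(); // the server talks to the processor
        }
    }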

人间绝色
#7 · 2018-12-31 03:33

Tool: Using ProGuard in your application can make it harder to reverse engineer your application.

查看更多