There’s an app for everything, and hackers and thieves are taking advantage. What are enterprises doing about it? Not enough.
Web and mobile application use has exploded in recent years as businesses have digitized and moved more of their operations to the cloud, and as the number of mobile devices has proliferated. Application breaches have increased commensurately, and show no signs of slowing—unless developers change the way they build and secure these apps.
Everyone, it seems, has at least one smartphone, but we don’t spend much time talking on them. The 3+ billion smartphone users worldwide downloaded more than 200 billion apps in 2019, and reportedly spend almost all their phone time using those apps. Android users’ in-app hours reportedly grew 20 percent in the first quarter of 2020 alone.
Securing mobile devices against malware, ransomware, and other intrusions has taken top priority with manufacturers, while the applications on those devices get short shrift. But cybercriminals, it turns out, like to use apps, too—as a portal for access to our devices, accounts, and data.
Particularly hard hit have been mobile and web applications in financial technology, healthcare, and entertainment, including streaming and video games. Akamai recently reported nearly 10 billion incidents of gaming-application “credential stuffing” and 152 million attacks on gaming web applications in 2018-2020.
But the problem isn’t limited to one or a few sectors. Across the spectrum, applications are not, for the most part, secure. Various studies report findings of:
- Security flaws in 83% of all apps
- Data leaks in 70% of popular Android applications and 92% of web retail apps
- Insecure data storage in 76% of apps
- Serious vulnerabilities in 71% of the top financial apps and 71% of healthcare apps
Given these grim statistics, the real surprise may be that we haven’t seen more application breaches. We most certainly will, however—unless organizations and their developers get serious about designing security features in-app.
The app insecurity complex
Breaches cost money—potentially lots of money—and, in some instances, lives. One study estimates an average data breach cost of $3.86 million. But what price do we put on the lives risked when a recent Universal Health Services breach caused its hospitals to reroute ambulances and cancel surgeries?
Rapid digitization seems to be one reason why applications are getting released before they’re properly secured. The trend toward “agile” models in software development, in which apps and updates get continually released on a 24/7 cycle, is said to favor speed over security. The shift to cloud technologies plays a role, as well. Securing network perimeters no longer suffices—because there is no perimeter. Apps reside in the cloud, and they communicate with one another and with a growing array of sensors and devices via the ever-expanding internet-of-things (IoT). Insecure apps mean insecure devices, networks, and systems, as hackers use apps as a portal for access to the rest.
There is a solution to the app insecurity complex: in-app protection, or embedding security features into applications while they are in design. Gartner estimates that, by 2022, half or more of successful “clickjacking” and mobile app attacks will be deemed to have been preventable had encryption and other security features been incorporated in-app, before deployment.
Most common forms of app attack
The most prevalent app weakness is broken access control. Data leaks and insecure configuration are other common vulnerabilities. Cybercriminals know how to exploit these vulnerabilities to:
- Create fake apps or clones of existing apps via reverse engineering to trick users into providing credentials and other sensitive data as well as access to accounts. This is also known as “tampering.”
- Install bots to launch attacks on websites and perform online betting and other transactions
- Install malware on the device or on others in its network. In the infamous WhatsApp malware injection breach, attackers exploited a VOIP (voice over internet protocol) vulnerability in the mobile app that allowed them to inject malware into phones simply by calling them.
- Skim credit-card information
- Inject malicious scripts for clickjacking and formjacking
- Provide access to sensitive stored data – via the device’s operating system, the development framework, cookies and preferences, and other avenues for attack
- Eavesdrop on API communications to steal the data in transit – also known as a “man in the middle” attack
But there are a number of effective ways to secure applications against intrusion, manipulation, and theft.
In-app protection techniques
Securing your business and consumer applications—and the access to accounts and stored data they harbor—involves planning, foresight, and, yes, coding. When an app will handle very valuable data, or will typically run in insecure environments such as on consumer devices, in-app protection is your safest bet for securing it. Developers have many in-app protection techniques from which to choose; some are highly effective, while others are less so. Typically, they’re divided into two camps: prevention and detection. Techniques in the first camp aim to stop breach attempts from succeeding; those in the second detect and react to intrusions after they have occurred.
Here are some of the most commonly used in-app protection techniques.
a) Code obfuscation: To “obfuscate” means to hide, something hackers already do well when they shroud the viruses and malware they install on systems behind a “curtain” of junk code to resist analysis. Developers can use the same trick defensively: obfuscation scrambles application code and renders it, to the inexperienced eye, unreadable, which makes static analysis and reverse engineering far more difficult. Common obfuscation techniques include renaming software components and identifiers, inserting useless “dummy” code as a diversion, breaking up the logical structure of the code, introducing layers of access indirection, and stripping key components from the app’s low-level functions. Parts of the code that too easily reveal their purpose, strings in particular, are also commonly encrypted.
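As a minimal sketch of just one of these techniques, string encryption, the toy example below XOR-encrypts a hostname at build time and decrypts it only at runtime. The key, hostname, and function names are all hypothetical; real obfuscators (such as ProGuard/R8 or commercial tools) combine this with identifier renaming, dummy code, and control-flow transformations.

```python
# Toy sketch of string encryption, one obfuscation technique among many.
# All names and values here are illustrative, not from any real product.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key (symmetric: applying
    it twice with the same key recovers the original bytes)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"\x5a\x13\xc7"  # hypothetical key embedded (and itself hidden) in the app

# At build time the plaintext "api.example.com" would be replaced by its
# encrypted form, so the hostname never appears verbatim in the binary:
ENCRYPTED_HOST = xor_bytes(b"api.example.com", KEY)

def api_host() -> str:
    # Decrypted only at runtime, just before use.
    return xor_bytes(ENCRYPTED_HOST, KEY).decode()

print(api_host())  # recovers "api.example.com"
```

A determined analyst can still recover the string by running the app, which is why obfuscation raises the cost of reverse engineering rather than eliminating it.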
b) Encryption & TLS certificate pinning: Modern encryption algorithms turn code (or text) into “secret” code, or ciphertext, that no one can read without the matching key. Encryption can protect data in storage, in transit, and moving among networks, in databases, server files, hard drives, emails, and more. For data in transit, the most widely used protocol today is Transport Layer Security (TLS), which provides confidentiality, integrity, and authenticity online. To ensure the authenticity of the communicating parties, TLS relies on certificates. Typically, the certificate a server presents contains the server’s public key and information identifying the server, both signed by a trusted third party (a certificate authority). Other parties can then use the public key to encrypt messages directed to that server, and the mathematics guarantees that without the matching secret private key it is infeasible to decrypt the protected data in transit.

To eavesdrop on encrypted traffic between a mobile application and its server back end, hackers issue fake certificates containing a public key whose private key they control. Because the information in a legitimate certificate is signed by a trusted third party, a fake certificate cannot simply be presented without the mobile app detecting the attack. So attackers who want to intercept client-server communication and learn about the back end’s API take an easier route: they add themselves to the list of trusted third parties maintained by their own phone’s operating system.
“Pinning” counters this by limiting the certificates an app will accept: the app ships with a carefully curated list of valid certificates (or their public-key hashes) and rejects everything else, including an attacker’s otherwise “trusted” fake.
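A minimal sketch of the hash-comparison at the heart of public-key pinning follows; the key bytes are stand-ins, since a real app would extract the SubjectPublicKeyInfo from the certificate presented during the TLS handshake (this is the scheme used by, for example, OkHttp’s CertificatePinner on Android).

```python
# Sketch of public-key pinning: the app ships with the SHA-256 hashes of
# the public keys it trusts and rejects any other key, even one whose
# certificate is signed by a CA the phone trusts. Key bytes are stand-ins.
import hashlib

# Hash(es) baked into the app at build time.
PINNED_HASHES = {
    hashlib.sha256(b"server-public-key-der-bytes").hexdigest(),
}

def is_pinned(presented_public_key: bytes) -> bool:
    """Accept the connection only if the presented key matches a pin."""
    return hashlib.sha256(presented_public_key).hexdigest() in PINNED_HASHES

# The legitimate server's key passes; an attacker's key does not,
# regardless of who signed the attacker's certificate.
print(is_pinned(b"server-public-key-der-bytes"))    # True
print(is_pinned(b"attacker-public-key-der-bytes"))  # False
```

Pinning the key hash rather than the whole certificate lets the server rotate certificates without breaking deployed apps, as long as the key pair stays the same.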
a) Dynamic analysis detection: This approach embeds a security system into an application so the app can judge whether its current execution environment is controlled by an attacker. Dynamic analysis detection identifies dynamic reverse engineering attempts that use debuggers, emulators, binary instrumentation engines (such as Frida), or hooking engines (such as Cydia Substrate). These mechanisms detect anomalous environments and breach attempts in real time and can wipe the app’s user data, keys, and other information to deter attacks. They may also terminate the app itself and alert the administrator to the incident.
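One concrete anti-debugging check on Linux-based systems such as Android: when another process is ptrace-attached (a debugger or instrumentation engine), the kernel reports that process’s PID in the TracerPid field of /proc/self/status. The sketch below parses that field; it takes the file’s text as a parameter so it can be demonstrated without a live debugger attached.

```python
# Sketch of a TracerPid check, one dynamic-analysis detection heuristic.
# A real app would read /proc/self/status on the device; here the text is
# passed in as a parameter for demonstration purposes.

def being_traced(proc_status_text: str) -> bool:
    """Return True if a /proc/<pid>/status dump reports a nonzero TracerPid."""
    for line in proc_status_text.splitlines():
        if line.startswith("TracerPid:"):
            return int(line.split(":", 1)[1].strip()) != 0
    return False  # field absent: assume not traced

clean  = "Name:\tmyapp\nTracerPid:\t0\nUid:\t10001\n"
traced = "Name:\tmyapp\nTracerPid:\t4242\nUid:\t10001\n"
print(being_traced(clean), being_traced(traced))  # False True
```

Production-grade protections layer many such checks (timing anomalies, known emulator artifacts, hooked-function signatures), since any single check is easy for an attacker to patch out.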
b) Privilege escalation detection: Mobile phones typically let users interact with them in limited ways; anything beyond normal app use is out of reach. For example, each app on a modern mobile OS maintains its own view of the device’s file system, and other apps’ files are inaccessible without explicit permission from the user. Similarly, the operating system does not allow one app to trace the interactions of another app running on the same device. The ability to trace a targeted victim application, however, is valuable to an attacker who wants to learn the app’s inner mechanics. Attackers therefore routinely break into devices in their own possession, a process known as “jailbreaking” on iOS or “rooting” on Android. On a rooted device, the user (attacker) can modify the operating system to trace the actions of any application installed on the phone. A jailbroken or rooted device usually indicates that some high-privileged process not approved by the vendor is present, invalidating assumptions in the operating system’s security model (process isolation, for example). Protected applications can detect devices on which privilege escalation has been carried out and react accordingly, usually by refusing to operate.
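One widely used root-detection heuristic on Android is to look for the `su` binary and other files that appear only on rooted devices. The sketch below illustrates the idea; the path list is illustrative, and real checks also probe build tags, writable system partitions, and known root-manager packages.

```python
# Sketch of a root-detection heuristic: probe for filesystem artifacts
# of rooting. The path list is illustrative, not exhaustive.

SUSPICIOUS_PATHS = [
    "/system/bin/su",
    "/system/xbin/su",
    "/sbin/su",
    "/system/app/Superuser.apk",
]

def looks_rooted(path_exists) -> bool:
    """path_exists is a callable like os.path.exists; injected here so the
    check can be demonstrated against simulated filesystems."""
    return any(path_exists(p) for p in SUSPICIOUS_PATHS)

# Simulated filesystems for demonstration:
stock_device  = {"/system/bin/sh"}
rooted_device = {"/system/bin/sh", "/system/xbin/su"}

print(looks_rooted(stock_device.__contains__))   # False
print(looks_rooted(rooted_device.__contains__))  # True
```

Because such checks are heuristics, apps typically combine several signals before deciding a device is compromised, and then degrade gracefully, for example by disabling sensitive features rather than crashing.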