I was asked this question today: “How can we ensure that requests hitting our API gateway are coming from our mobile app?” What guarantees are available to API publishers and API consumers that their traffic is secured? With the Snowden releases and Heartbleed still fresh, security awareness is at an all-time high, and this question deserves a good bit of thought put into a response.
tl;dr You can’t be 100% sure.
It would seem, on the face of it, that a proper security researcher’s answer would be “you can’t ensure any sort of security; it’s all effectively post-event fraud detection and mitigation,” considering the many ways attackers can mount man-in-the-middle attacks: the recent “Factoring attack on RSA-EXPORT Keys” (FREAK), malware-driven DNS poisoning, and even idiotic things like a “sign-all” Superfish certificate on one’s machine.
Mobile apps connect to remote servers via web-based APIs, and there are a few different strategies for protecting that traffic. The simplest (and most reductionist) is to have the client provide some sort of unique identifier, such as an HTTP User-Agent string. That is easily defeated: capture the network traffic, then replicate the User-Agent string from some other attack software in a crude form of replay attack, as in the sketch below.
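Here’s a minimal sketch of why that fails, in Python; the endpoint and User-Agent value are hypothetical. Any client that has ever seen the app’s traffic can claim to be the app:

```python
# Spoofing a User-Agent check: any client can claim to be the mobile
# app. The endpoint and UA string below are made up for illustration.
import urllib.request

req = urllib.request.Request(
    "https://api.example.com/v1/orders",
    headers={"User-Agent": "AcmeMobileApp/2.3 (Android 12)"},  # spoofed
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:200])
```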
Encryption, while table stakes for providing some sort of security, isn’t going to do it either. TLS (Transport Layer Security), whose first version was effectively SSL 3.1, is a start, but flaws have been found and more will undoubtedly follow. A standard way API publishers attempt to guarantee the authenticity of a transaction is via an API or app key: assign a key and secret to the app consuming the API. Where do app developers put this key and secret? Often they embed them in the mobile app, effectively putting the key and secret out there for anyone to brute force or decompile out of the mobile app binary. Android, iOS, and Cordova apps are all subject to key extraction. Similarly, using a private key to sign requests still requires that private key to be on the mobile device, with the same vulnerabilities, including replay attacks once a key has been generated or extracted.
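To make the problem concrete, here’s a sketch of a typical key-plus-secret signing scheme, assuming a hypothetical HMAC-SHA256 design with invented header names. The fatal flaw is right at the top: the secret has to ship inside the binary:

```python
# Request signing with an embedded app key and secret (a hypothetical
# HMAC-SHA256 scheme). The problem: APP_SECRET lives on the device, so
# anyone who decompiles the app can recover it and sign anything.
import hashlib
import hmac
import time

APP_KEY = "my-app-key"          # shipped in the binary
APP_SECRET = b"my-app-secret"   # also shipped in the binary

def sign_request(method: str, path: str, body: bytes) -> dict:
    timestamp = str(int(time.time()))
    message = f"{method}\n{path}\n{timestamp}\n".encode() + body
    signature = hmac.new(APP_SECRET, message, hashlib.sha256).hexdigest()
    return {
        "X-App-Key": APP_KEY,
        "X-Timestamp": timestamp,  # narrows, but doesn't stop, replay
        "X-Signature": signature,
    }

headers = sign_request("POST", "/v1/orders", b'{"item": 42}')
```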
Here's Cory Doctorow, being more eloquent than I can ever be, on the topic:
So we're chasing an impossibility here. The idea is that we will have a world in which bits can be widely copied with permission and can't be copied at all without permission. And those of you of a technical background, and I assume that is all of you, know this is an impossibility. Bruce Schneier says "making bits harder to copy is like making water that's less wet". There is no future in which bits will get progressively harder to copy. Indeed if bits did get harder to copy, it would be alarming. It would mean some of our critical infrastructure had stopped working ... Barring nuclear catastrophe, from here on in, bits only get easier to copy. And yet we're chasing a future where bits will get progressively harder to copy. [26:30]
Another sort of non-option is to push the security back to the API layer: don’t put keys or secrets in the mobile app, but provide a server-side API proxy that holds the keys and secrets and relays calls to the actual target API. Mobile hosters like appery.io do something similar, keeping the API keys on their hosted proxy and generating a Cordova mobile app that interacts with their servers. With this option, the app keys aren’t scattered across the world; there’s still an attack surface, but it’s radically minimized. And with the keys in house, there are more options for securing servers and networks within an organization’s control. Still, there’s no way to guarantee that it’s the mobile app making a call to the server and not some script kiddie’s hack bot.
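A minimal sketch of the relay idea, using only the Python standard library; the target API, header name, and key are placeholders, and error handling is omitted. The point is that the real key lives only on the proxy, never on the device:

```python
# Server-side relay: the mobile app talks to this proxy, and only the
# proxy holds the real API key. Target URL and key are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

TARGET_API = "https://partner-api.example.com"
API_KEY = "real-key-kept-server-side"  # never ships to the device

class RelayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the path, attaching the secret key server-side.
        req = urllib.request.Request(
            TARGET_API + self.path,
            headers={"Authorization": "Bearer " + API_KEY},
        )
        with urllib.request.urlopen(req) as upstream:
            body = upstream.read()
            status = upstream.status
            ctype = upstream.headers.get("Content-Type", "application/json")
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), RelayHandler).serve_forever()
```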
All other options get harder to implement and tend to become non-starters for organizations that just want to ship a mobile application: alternative flows and custom libraries. The best approach would be to issue an app key and token to every app (and maybe even every session) that interacts with your mobile-exposed API, via a communication handshake that requests, receives, and then uses a one-time token (sketched below). Even then, there’s no guarantee that the handshake can’t be figured out by a combination of watching the network traffic and breaking TLS. Lastly, custom libraries not found in the common Android, iOS, or Cordova stacks can be used to obfuscate the communication patterns between mobile device and server. Obfuscation isn’t security, though; it’s just a roadblock. Roadblocks are worth something when considering security, but they don’t guarantee that the application is a valid application.
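Here’s what the client side of such a handshake might look like, assuming hypothetical /handshake and /v1/orders endpoints; the server would mark each token as spent on first use:

```python
# One-time-token handshake sketch: trade the app key for a short-lived,
# single-use token, then spend that token on the real call. Endpoints
# and field names are invented. As noted above, anyone who can break
# TLS and watch traffic can still learn this handshake.
import json
import urllib.request

BASE = "https://api.example.com"

def fetch_one_time_token(app_key: str) -> str:
    req = urllib.request.Request(
        BASE + "/handshake",
        data=json.dumps({"app_key": app_key}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]  # server marks it spent on use

token = fetch_one_time_token("my-app-key")
req = urllib.request.Request(
    BASE + "/v1/orders",
    headers={"X-One-Time-Token": token},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```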
Whitelisting all known apps is a potential solution, but only a potential one: it doesn’t scale well.
All of this leads back to mitigation: assume the key will leak and focus on what to do when that happens. Adopting a defensive posture is very important, and practically table stakes for being serious about security. Automated monitoring and known-good patterns are hallmarks of classic fraud-detection systems, and watching those patterns can help organizations indirectly determine whether a connection is coming from their mobile app. Expected use patterns can be turned into dynamic operational policies, at (currently) great computational expense and (currently) only in a custom manner. Enter big data and machine learning.
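As a toy illustration of turning use patterns into policy, here’s a sketch that tracks a rolling per-key request rate and flags keys that stray far from their own baseline; the window and threshold are invented, and real fraud-detection systems are vastly richer:

```python
# Toy pattern-watching: keep a rolling request rate per app key and
# flag keys that deviate from their own learned baseline. Thresholds
# and window sizes are made up for illustration.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
SUSPICIOUS_FACTOR = 5.0  # hypothetical: 5x the baseline rate

recent = defaultdict(deque)           # app_key -> timestamps in window
baseline = defaultdict(lambda: 1.0)   # learned requests/min per key

def record_request(app_key: str) -> bool:
    """Return True if this key's current rate looks anomalous."""
    now = time.time()
    window = recent[app_key]
    window.append(now)
    while window and window[0] < now - WINDOW_SECONDS:
        window.popleft()
    rate = len(window)  # requests in the last minute
    if rate > baseline[app_key] * SUSPICIOUS_FACTOR:
        return True  # hand off to throttling / step-up auth
    # Slowly adapt the baseline toward observed traffic.
    baseline[app_key] = 0.95 * baseline[app_key] + 0.05 * rate
    return False
```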