Monday, May 13, 2019

The Apple App Store is Security Theater

Apples are sometimes rotten


Money, $$$, Money

From its inception, the Apple App Store has been about exactly one thing: separating you from your money. That's not exactly news, but Apple, being the good marketers they are, decided to put a nice bow on their money turd by claiming that they were going to curate the apps, and be super careful, and look out for you, and send roses to your mother on Mother's Day, and, and... They may do some of those things, and the people who do that curation may in fact be honestly trying to do their job to stop bad guys. But in the end, it's all a bunch of hooey, aka security theater.

The App Store Treadmill

In order to understand this, you have to understand what the process is for developers. In a nutshell, your iPhone (etc.) will not run an app whose code has not been signed by an Apple cert/key. This blocks any other store from being able to sell apps for the iPhone. There are a few exceptions to this (obviously one that allows developers to develop and test their own apps, and I hear enterprise apps these days too), but the mainline loop is: developer develops app, developer submits app for approval, Apple either approves it and it goes up to the App Store, or they reject it and the developer has to fix whatever they are complaining about. Rinse, repeat. My experience with their rejection criteria was that it was mostly petty, small things that didn't have much, if anything, to do with security. Others may have different experiences. The huge downside is that if they reject your app for whatever reason, you have to resubmit the app for approval once you're done, and... wait. A long time. If you were fixing a critical bug -- including a security bug! -- tough noogies.

Now Apple has always been very secretive about what their testing entails. We weren't trying to build an app to probe the surface of their security testing, so it's really hard to say what it might involve. Maybe they do find both malicious and unintentional problems. Maybe they find a lot of them. Who knows? But there is a gigantic hole, a you-could-sail-the-Titanic-through-it sized hole, that makes all of that testing completely useless, especially if you're a bad guy.

Using WebViews

When we were building the Phresheez app, it became pretty scary to think about writing and maintaining two different UIs. A common language between the two would be very useful. And as it happens, there is: javascript. Both Android and iOS have what are known as webviews (UIWebView was what we used on the iPhone). Webviews are pretty much what they sound like: embedded web browsers that an app can display on the phone. More importantly, the app can communicate bidirectionally with the code running in the webview. This is very handy: we wrote the parts of the app that needed to be written natively (mostly the GPS handling stuff) in native code, and all of the UI in the webview. Portability problem solved.
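To make that bridge concrete, here's a minimal sketch of the pattern using Apple's newer WKWebView API (we were on the older UIWebView, and the handler name "app" and the onLocation function are made up for illustration):

    import UIKit
    import WebKit

    class HybridViewController: UIViewController, WKScriptMessageHandler {
        var webView: WKWebView!

        override func viewDidLoad() {
            super.viewDidLoad()
            let config = WKWebViewConfiguration()
            // Expose a js -> native channel named "app".
            config.userContentController.add(self, name: "app")
            webView = WKWebView(frame: view.bounds, configuration: config)
            view.addSubview(webView)
            // Load the UI shipped in the app's assets.
            if let url = Bundle.main.url(forResource: "index", withExtension: "html") {
                webView.loadFileURL(url, allowingReadAccessTo: url.deletingLastPathComponent())
            }
        }

        // The javascript side calls:
        //   window.webkit.messageHandlers.app.postMessage({...})
        func userContentController(_ userContentController: WKUserContentController,
                                   didReceive message: WKScriptMessage) {
            print("from js: \(message.body)")
            // Native -> js: push a GPS fix (or anything else) into the page.
            webView.evaluateJavaScript("onLocation(39.6, -106.4)")
        }
    }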

There's one other important property of all of this. Since it's just web stuff (html, js, css, etc.), the app can either get it locally from the app's assets, or make a request to some server somewhere, or both. So we bundled all of this stuff up into a zip ball on our backend servers, and the app knew how to go fetch the zip ball and load it into the webview. Remember all of that waiting for reviews? Problem solved.
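The fetch side is nothing exotic. Something like this sketch (the URL is made up, and Foundation has no public unzip API, so the unzip step is left to a third-party library):

    import Foundation

    // Hypothetical endpoint; ours looked nothing like this.
    let bundleURL = URL(string: "https://example.com/ui-bundle.zip")!

    let task = URLSession.shared.downloadTask(with: bundleURL) { tempURL, _, error in
        guard let tempURL = tempURL, error == nil else { return }
        let docs = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
        let dest = docs.appendingPathComponent("ui-bundle.zip")
        try? FileManager.default.removeItem(at: dest)
        try? FileManager.default.moveItem(at: tempURL, to: dest)
        // Unzip with your library of choice, then point the webview
        // at the extracted index.html via loadFileURL.
    }
    task.resume()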

The Hole

There is nothing nefarious about what we did. We had perfectly legitimate reasons for doing this, and Apple does not have a policy against it. I'm not sure they could ever have a policy banning it, because that's just the way the web works: you can add script tags in the html that point to a remote server, and that's perfectly legitimate. How does this make all of Apple's so-called vetting security theater? Well, if it's legitimate for a good guy to load javascript code from an external server, it's legitimate for a bad guy to load it from an external server too. And a clever bad guy could even go to the trouble of cleaning up their malware while they are in App Store review, and then switching back to the evil code afterward.
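Concretely, the app side of that bait and switch is a few lines; the binary Apple reviews never changes, and everything interesting lives on a server Apple never sees (the URL here is, of course, made up):

    // The app ships this and nothing else. What the server serves
    // can change the day after approval.
    let html = """
    <html><body>
    <script src="https://cdn.example.com/app.js"></script>
    </body></html>
    """
    webView.loadHTMLString(html, baseURL: URL(string: "https://cdn.example.com/"))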

Wouldn't that be against Apple's policies? You bet. But they would be none the wiser until after the app was deployed. Which is the exact same situation that Apple fanboys love to make fun of Android for. You put your evil stuff in javascript, run the webview, PROFIT! Note: the webview doesn't even have to be part of your UI... it's just a vehicle to run javascript, if that's all you want to do.
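For instance, a webview you never add to the screen will still load a page and run its scripts (modern WKWebView may throttle or suspend off-screen pages, so a careful bad guy would probably keep a tiny view in the hierarchy; this is a sketch, not a recipe):

    import WebKit

    // No UI at all: a zero-sized webview that exists only to run javascript.
    let runner = WKWebView(frame: .zero)
    runner.load(URLRequest(url: URL(string: "https://cdn.example.com/runner.html")!))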

The Pooch, She is Screwed

This feature has been in iOS from the very beginning, and there are tons of hybrid web/native app frameworks out there. Closing this hole would break every single one of them, and for that matter probably every app that uses a webview at all. So realistically they can't do that, and even if they did, it would be catastrophic. Apple really doesn't need to spend much, if anything, behind the curtain to preemptively root out bad guys: any bad guy worth their salt already knows this trick. If Apple does spend lots of resources, it's just a marketing expense.

This is why I've been saying that the so-called vetting is all a bunch of security theater. It was always a marketing thing and nothing more.

1 comment:

  1. I have no software knowledge at this level of understanding, but I do recognize that Apple's marketing of 'we are taking care of all things for you' is BS.
