Monday, May 13, 2019

The Apple App Store is Security Theater

Apples are sometimes rotten


Money, $$$, Money

From its inception, the Apple App Store has been about exactly one thing: separating you from your money. That's not exactly news, but Apple, being the good marketers they are, decided to put a nice bow on their money turd by claiming that they were going to curate the apps, and be super careful, and look out for you, and send roses to your mother on Mother's Day, and, and... They may do some of those things, and the people who do that curation may in fact be honestly trying to do their job and stop bad guys. But in the end, it's all a bunch of hooey, aka security theater.

The App Store Treadmill

To understand this, you have to understand the process for developers. In a nutshell, your iPhone will not run an app whose code has not been signed by an Apple cert/key. This blocks any other store from being able to sell apps for the iPhone. There are a few exceptions (obviously one that allows developers to develop, but also enterprise apps these days too, I hear), but the mainline loop is: developer develops app, developer submits app for approval, and Apple either approves it and it goes up on the App Store, or rejects it and the developer has to fix whatever they're complaining about. Rinse, repeat. My experience with their rejection criteria was that it was mostly petty, small things that had little if anything to do with security. Others may have different experiences. The huge downside is that if they reject your app for whatever reason, you have to resubmit it once you're done, and... wait. A long time. If you were fixing a critical bug -- including a security bug! -- tough noogies.

Now, Apple has always been very secretive about what their testing entails. We weren't trying to build an app to probe the surface of their security testing, so it's really hard to say what it might involve. Maybe they do find both malicious and unintentional problems. Maybe they find a lot. Who knows? But there is a gigantic hole -- a hole you could sail the Titanic through -- that makes all of that testing completely useless, especially if you're a bad guy.

Using WebViews

When we were building the Phresheez app, it was pretty scary to think about writing and maintaining two different UIs. A common language between the two would be very useful, and as it happens there is one: javascript. Both Android and iOS have what are known as webviews (UIWebView was what we used on the iPhone). Webviews are pretty much what they sound like: embedded web browsers that an app can display on the phone. More importantly, the app can communicate bidirectionally with the code running in the webview. This is very handy: we wrote the parts of the app that needed to be written natively (mostly the GPS handling stuff), and all of the UI in the webview. Portability problem solved.
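
To make the bridge concrete, here's a hypothetical sketch of the javascript side. The names and the URL-scheme trick are illustrative, not our actual Phresheez code; UIWebView had no official JS bridge, so intercepting a made-up URL in the navigation delegate was the common idiom, and native code could call back by evaluating javascript in the view:

    // Hypothetical webview-side bridge. JS -> native: navigate to a made-up
    // URL that the native shouldStartLoadWithRequest: delegate intercepts
    // and parses. Native -> JS: the app evaluates javascript in the webview,
    // so we expose a global function for it to call.
    function callNative(method, args) {
        window.location.href = "phz://" + method + "?" +
            encodeURIComponent(JSON.stringify(args));
    }

    function updateTrackUI(lat, lon) {
        console.log("new GPS fix:", lat, lon); // the real UI would redraw the map
    }

    window.onNativeEvent = function (event) {
        if (event.type === "gps") {
            updateTrackUI(event.lat, event.lon);
        }
    };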

There's one other important property to all of this. Since it's just web stuff (html, js, css, etc), the app can either get it locally from the app's assets, or make a request to some server somewhere, or both. So we bundled all of this stuff up into a zip ball on our backend servers, and the app knew how to go fetch the zip ball and load it into the webview. Remember all of that waiting for reviews? Problem solved.
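
In its simplest form, the remote-load step is just this (illustrative only; we actually fetched a zip ball of html/js/css and unpacked it, which I've elided here, and the URL is made up):

    // Fetch UI code from our server and run it in the webview. The key
    // point: what runs is whatever the server hands back at runtime, not
    // what shipped in the reviewed app bundle.
    function loadRemoteUI(url) {
        var script = document.createElement("script");
        script.src = url;
        document.head.appendChild(script); // executes when it loads
    }

    loadRemoteUI("https://backend.example.com/ui-bundle.js"); // placeholder URL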

The Hole

There is nothing nefarious about what we did. We had perfectly legitimate reasons for doing it, and Apple does not have a policy against it. I'm not sure they could ever ban the practice, because that's just the way the web works: you can add script tags in the html that point to a remote server, and that's perfectly legitimate. So how does this make all of Apple's so-called vetting security theater? Well, if it's legitimate for a good guy to load javascript code from an external server, it's just as easy for a bad guy to do the same. And a clever bad guy could even go to the trouble of cleaning up their malware while their app is in App Store review, and then switch back to the evil code afterward.
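
To make the hole concrete, here's a deliberately simplified sketch of what a bad guy's server could do. Nothing exotic here, just an ordinary web endpoint deciding what javascript to return (all names invented, obviously):

    // Hypothetical attacker server (node.js). While the app is in review it
    // serves benign code; afterwards it serves whatever it wants. Apple's
    // review process only ever sees the first branch.
    const http = require("http");

    const IN_REVIEW = process.env.IN_REVIEW === "1"; // flipped by the operator

    http.createServer((req, res) => {
        res.setHeader("Content-Type", "application/javascript");
        if (IN_REVIEW) {
            res.end("console.log('perfectly innocent app');");
        } else {
            res.end("/* arbitrary code the reviewers never saw */");
        }
    }).listen(8080);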

Wouldn't that be against Apple's policies? You bet. But they would be none the wiser until after the app was deployed. Which is the exact same situation Apple fanboys love to make fun of Android for. You put your evil stuff in javascript, run the webview, PROFIT! Note: the webview doesn't even have to be part of your UI... it's just a vehicle to run javascript, if that's all you want to do.

The Pooch, She is Screwed

This feature has been in iOS from the very beginning, and there are tons of hybrid web/native app packages out there. Closing this hole would break every single one of them, and for that matter probably every app that uses a webview at all. So they realistically can't do that, and even if they did, it would be catastrophic. Which means Apple has no real reason to spend much, if any, resources behind the curtain preemptively rooting out bad guys: any bad guy worth their salt already knows this trick. If Apple does spend lots of resources, it's just a marketing expense.

This is why I've been saying that so-called vetting is all a bunch of security theater. This was always a marketing thing and nothing more.

Wednesday, April 3, 2019

Don't Eat Webauthn, it's Made of HOBA's!

So I recently heard about a new W3C standard called Webauthn. From what I can tell, it's very much in the spirit of our HOBA RFC (RFC 7486). Ours was more of a sketch of what could be, and my javascript implementation of one of the alternative ideas in the RFC was a truly horrible hack and I knew it, even if I did try to justify some of its sketchier aspects. But that was 2012, and the working group didn't even spin up till 2015, so we were clearly ahead of our time.

It works pretty much like my javascript implementation, but with much better crypto, credential storage, etc, etc. It also goes on to specify interfaces to signing dongles, which is good for what it's trying to do, but I'm seriously worried that people are going to think that webauthn *requires* portable keys, biometrics, etc. That would be a shame, because the real enemy is not passwords per se, it's passwords that are transmitted over the wire. And of course, their reuse. A single strong local password which is never put on the wire is a perfectly fine way to gain access to credentials. That's pretty much what we do today with password managers, browser fill-in-the-blanks, etc.
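
For the curious, the browser side of enrolling a credential looks roughly like this. This is the standard navigator.credentials API, trimmed way down; in real use the challenge and user id come from the server, and the names here are placeholders:

    // Minimal webauthn registration sketch. The challenge and user.id must
    // be server-supplied random bytes in practice, not made up locally.
    async function enroll() {
        const cred = await navigator.credentials.create({
            publicKey: {
                challenge: crypto.getRandomValues(new Uint8Array(32)),
                rp: { name: "example.com" }, // the relying party (site)
                user: {
                    id: new TextEncoder().encode("user-1234"),
                    name: "alice@example.com",
                    displayName: "Alice",
                },
                pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
            },
        });
        // Send cred.response (attestation + public key) to the server to store.
        return cred;
    }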

The reason I think that's bad is that I worry that site owners doing their own authentication will view webauthn as niche if they think it requires something other than a local password. Why would Epicurious implement this if they thought it would only reach the slice of people who have access to those things? Remember, not everything is done on phones, so counting on, say, biometrics only works for some of the use cases. That would be a very bad result, as webauthn with a local credential-store password is a perfectly fine thing to keep others out of my super secret recipe box.

One of the things I did with my part of HOBA was to elaborate quite a bit on the enrollment problem. On the tutorial sites I found, nobody seemed to mention it. I can definitely understand why the standard itself considers it out of scope (it is), but it's a little odd that it's not getting talked about much, because it's probably the hardest part of deploying webauthn server-side. Maybe my Google-fu failed me, but dealing with enrollment always sucks. Since webauthn credentials are device specific, it's going to take some guidance on how to implement it server side, and users getting used to a new routine. Fortunately, on the user side we've come a long way, with more and more sites validating that the use of a new device is ok. With webauthn, that would have to be universal.
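
The login side, once a credential lives on the device, is the mirror image. The new-device enrollment dance (mail a link, confirm, then run the create() step again on that device) is the part the standard leaves to the site:

    // Minimal webauthn login sketch. credentialId is the byte blob the
    // server stored for this account at enrollment; the challenge must be
    // fresh random bytes from the server on every attempt.
    async function login(credentialId, serverChallenge) {
        const assertion = await navigator.credentials.get({
            publicKey: {
                challenge: serverChallenge,
                allowCredentials: [{ type: "public-key", id: credentialId }],
            },
        });
        // Server verifies assertion.response.signature against the stored
        // public key; no password ever crosses the wire.
        return assertion;
    }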

All in all, I think this is great stuff and about time. The scourge of passwords on the wire has a real chance of going away in time as sites adopt it. Let's just not hitch it to getting rid of passwords altogether. All forms of credentials have their own set of issues and tradeoffs. The important part is keeping them off the wire.

Stir/Shaken Questions

I've been looking over the STIR/SHAKEN stuff for dealing with Spam-o-SIP. There's a really nice problem statement (RFC 7340) which outlines the legacy PSTN and its interactions with SIP gateways and session border controllers (SBCs). The gist of the resulting set of RFCs is a pretty complicated architecture: Policy Administrators (PAs) (I assume something like the ITU or some other such creature) which do admission control of who is allowed to join the telephone-number-bearing club, delegated CAs to sign club members' certs, and then a sort of peculiar over-the-wire signing and verifying service within each provider. From my understanding, this is all within a SIP network -- anything beyond a PSTN gateway is terra incognita if the gateway switch is relaying it from a source it does not control. But that's just a previously unsolved technical problem, I think, and the layer 8 solutions from the past may still be effective, re: Caller ID. Now, I must fess up that I only recently heard about this, so my characterizations are likely to have flaws or be downright wrong; take this with a grain of salt.
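
For flavor, the thing being signed is (from my reading of the RFCs) a JWT called a PASSporT, whose payload looks roughly like this. The attest and origid claims are the SHAKEN additions, and all the values here are invented:

    // Rough shape of a SHAKEN PASSporT payload (per RFC 8225/8588, as I
    // read them). The carrier signs this with its delegated cert.
    const passportPayload = {
        attest: "A",                    // full attestation: carrier knows this customer
        orig: { tn: ["12025550100"] },  // calling number being vouched for
        dest: { tn: ["12025550123"] },  // called number
        origid: "f81d4fae-7dec-11d0-a765-00a0c91e6bf6", // opaque origination id
        iat: 1554300000                 // issued-at timestamp
    };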

Plain Old SIP

My first question is whether this is intended to be a solution for plain old SIP URIs as well. If so, it's sort of worrisome to have this PA entity determining who can join the club for an all-IP service. Binding telephone numbers to particular carriers, and making certain that the carrier is part of the telephone number club, is a much different problem space than SIP-to-SIP with no telephone numbers involved. Assuming it is not intended to solve that problem, is there a way to identify either the end user URI, or to aggregate at provider boundaries on, say, a domain basis? With DKIM, we chose to allow mail providers to sign the mail with their domain -- whether or not they are the originating domain -- to say "blame me". This gives the originating provider some incentive not to be blamed, by requiring MUA authentication, etc.
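
For reference, the "blame me" bit in DKIM is just the d= tag in the signature header; whoever signs is whoever you get to blame (hashes truncated, names invented):

    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
            d=mailprovider.example; s=sel1;
            h=from:to:subject:date; bh=...; b=...

The verifier fetches the public key from DNS at sel1._domainkey.mailprovider.example and checks the signature; no CA needed anywhere.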

RFC 7340 talks about SIP Identity (RFC 4474), and I can certainly understand the issue with B2BUAs, which have analogs in the email universe (cf. mailing lists). With DKIM we essentially punted and declared that the breaking gateway should sign the mail anew, and that they are now the one to blame. This seems to have worked out ok, as far as I can tell. Dealing with spam is all about messy heuristics, and I expect that SIP spam will be no different. The general idea is to be able to get enough clues about bad actors as part of the full set of clues.

DKIM for SIP?

It has always seemed to me that a DKIM-like approach would work for SIP as well. It has the nice property that it doesn't require a centralized CA. I'm too lazy to look up SIP Identity again, but I'm guessing it does require one in some form. The web got lucky with TLS because there were only a very few browsers, and the accepted CA root list was controlled by them before things could get out of control. SIP has been around a long time, and SIP Identity didn't get much traction from what I read, so I'm not sure if that's part of the problem, or if it's academic since it just didn't get deployed.

DKIM for SIP would be fairly trivial to implement. In fact, I hacked together a SIP proxy that signed and verified SIP messages. And of course the infrastructure is all there, as DKIM at this point is really old and really widely deployed. If SIP Identity doesn't get used, for whatever reason, and we don't have a deployed solution for SIP-to-SIP calls, that seems like a really big hole. Fortunately, we have an existence proof with email which could be retrofitted pretty easily.
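
To give a sense of how little machinery is involved, here's a sketch of the signing half in node.js. Everything here is illustrative: the header name, the sign-the-whole-message shortcut, and the DNS layout would all need a real spec:

    // Hypothetical DKIM-style signer for a SIP message (node.js).
    const crypto = require("crypto");

    function signSip(sipMessage, domain, selector, privateKeyPem) {
        const signer = crypto.createSign("RSA-SHA256");
        signer.update(sipMessage); // a real scheme would canonicalize selected headers
        const b = signer.sign(privateKeyPem, "base64");
        // The verifier would fetch the public key from DNS, analogous to
        // DKIM's <selector>._domainkey.<domain> TXT record.
        return "X-Sip-Sig: d=" + domain + "; s=" + selector + "; b=" + b +
               "\r\n" + sipMessage;
    }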

Is this Partly a UI Problem?

When I receive a call, my phone does the best it can to tell me who it is, rather than giving me just a plain old telephone number. In fact, one of the best spam tells is simply not recognizing the number. At this point we are all very comfortable using email addresses as identities. So what I'm getting at is: why doesn't my phone app take plain old SIP calls and treat them equivalently? In which case it can use the From: address rather than some Paleolithic telephone number. There's nothing stopping Google or Apple from adding a new SIP protocol head onto their existing call apps, after all. This would at least break the notion that legacy telephone numbers are the only ID I should see when I get an incoming call. Hopefully legacy PSTN calling will be winding down in the next 10 years or so, but we certainly don't want to carry legacy telephone anachronisms into the future of an all-SIP infrastructure.

Signer and Verifier Services

This is more of a quibble, but I really don't understand why there needs to be a separate on-the-wire protocol between the SIP proxies and the signing and verifying services. With DKIM, the signing and verifying are done in the MTA itself, and that demonstrably works just fine. The CPU cost of signing these days is minuscule, and verification was always close to free. Plus, it costs time to send something over the wire and wait for a response, and with SIP that is a real consideration. So I just don't get what motivated this.

Conclusion

While I think this looks like good work, I'm worried that it's putting a lot of time and energy into dealing with a legacy issue, and perhaps either not dealing with the SIP-to-SIP case, or dealing with it in a way that requires being invited to a club. Either would be very bad. We really should be trying as hard as possible to make plain old SIP the way people expect to get a "phone call", and SIP itself needs a workable solution that doesn't depend in any way on legacy PSTN stuff, or we're going to be in the same boat a few years down the line.