Saturday, January 2, 2021

Storing Asymmetric Keys in Client Storage for Login and Enrollment

Very unlocal storage

I'm genuinely conflicted about whether storing private keys in browser storage is OK all things considered, or really wrong. It certainly feels wrong as hell, and this goes back years. With my HOBA revisited post (using asymmetric keys for login and enrollment), I dusted off my ancient prototype code and used WebCrypto to generate keys and to sign enrollment and login requests. Obviously you don't want to re-enroll keys each time you log in, so the private keys need to be stored in a place that is accessible to the WebCrypto signing code at the very least. To my knowledge, there is no way to get WebCrypto to keep the keys themselves inaccessible on a permanent basis, so that means they need to be exported, stored by the app, and reimported on the next login attempt.

So what is the implication? That you'll be storing those keys in localStorage or IndexedDB. They could be kept in the clear -- say, in JWK or PEM form -- or wrapped using a password to encrypt the blob for extra security, in which case it would look much like a normal login form even though the password would never be sent to the server. From everything I can tell this is frowned upon, but strikingly not categorically. OWASP says it is not good form or somesuch to put sensitive information in client storage. They give some rationale such as XSS attacks, but seem not to be able to work up the courage to say "NO! Don't ever do this!" This leads me to believe that they feel the same yucky feelings I do, but can't quite bring themselves to a logical reason to say that it's always bad.

Part of the problem is that if you have XSS problems, you are well and truly fucked; browser-based storage is a problem, but everything in your entire existence is a problem with XSS too. So to my mind that says to just take that rationale off the table, because you would never allow it as an excuse for shitty practices like SQL injection vulnerabilities: they're just not acceptable, full stop. After that, the rationale for not using browser storage gets murky in a real hurry. As far as I can tell, it reduces down to that yuck factor.

Thinking about this some more, browsers can and do store immense amounts of sensitive information, and that information is available to javascript code running in the browser sandbox too. The sensitive information comes from the browser's form fill-in helpers. They collect names, passwords, social security numbers, credit cards, your first dog's name... you name it, they have it. Yet people do not think twice about the browser helping out. All of those things are completely available to javascript, including any rogue XSS script that gets to run. Yet I've not heard anybody sound the klaxons about this, and if anybody has, nobody seems to be listening. Which is to say, that ship has long since sailed.

So are there any differences with browser form fill-in? Yes, but they are subtle and I'm not sure they really change the attack surface much. Browser helpers do give you the ability to say no, don't fill this in, which is definitely a nice feature and enhances security, since not handing a value over to the DOM means that client code won't see it. I think, however, that this is a chimeric victory: if everybody uses the feature because the browser is good at figuring out what needs to be filled in, then the victory is just nibbling around the edges of the actual security problem.

Now, to be certain, I think a situation where the browser stores the keys such that they are not visible to javascript code, and where the user is given that choice, would be great. What I visualize is that private keys are stored along with all of the rest of the sensitive data browsers store, but somehow delivered down to client js code only when the user allows it, as with form data. The keys could be encrypted with a secret only the browser knows, or maybe the crypto routines in WebCrypto could take a new key type which is really just a reference to a key rather than the key itself. Which is to say, it's just an implementation detail rather than anything fundamental. This would be ideal, and would really solve the problem. It would, of course, require changes to the browser and definitely standardization. Which is to say, it's a long way off, but it's definitely possible.
 
The question that remains is a riff on the Saint Augustinian bargain: give me better security, but not just yet. That is, should we keep chaste until a better solution comes along, or should we make do in the meantime with a less perfect solution? Given what I can tell of the risks, I put myself into the "just not yet" camp. Passwords over the wire are such a huge problem that almost anything would be better. Given that browsers are already exposing sensitive information including passwords, I'm not sure exactly what the big problem is. The threats are much worse with passwords, given their reuse, so it seems to me that the incremental gain is completely worth it even if it is not the best long-term solution. That is to say, even if I manage to steal your private key for a site, that gives me exactly no amplification attack, unlike reused passwords.

So in conclusion, yes, I totally get the ick factor. But the ick is part and parcel of the entire beast, and not just of browser storage mechanisms. What that tells me is that one needs to put this in perspective. As with all security, risk is not an absolute and needs to be viewed in the context of the alternatives. The alternative in this case carries the additional amplification factor of reused passwords. Since all things otherwise are fairly equal given browser form fill-in code, I think that's pretty significant and a good reason to override the "should not put sensitive information in client storage" recommendations.





