>The reason the login form is delivered as web content is to increase development speed and agility
You saved some sprints but invalidated the purpose of the project. Very agile.
>Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.
The safety of the default installation is crowdsourced across all users and can't be targeted. The safety of the JS I load from Mozilla is not and I would have to verify its safety every time. Unless I'm misunderstanding something it can never be as trustworthy.
A more complete comment from rfk on the bug tracker:
> The reason the login form is delivered as web content is to increase development speed and agility. You're right that web content has a larger potential attack surface than code that's built into the browser, but using web content also brings other kinds of security benefit that may not be obvious. That agility meant that during the incident in [1] we were able to respond quickly and effectively to protect users' data, and to roll out an updated login flow containing an email confirmation loop. It means that when we ship two-factor authentication over the coming weeks, it will be immediately available to all users on all platforms. It means we can address Bug 1320222 in a single place and be confident we won't lock out older devices. And it means we can easily bring new Firefox apps like Lockbox into the Firefox Accounts ecosystem.
> Our approach has been to embrace the benefits of web content while trying to reduce the potential attack surface as much as possible. That includes some simple things like hosting the web content on its own server to reduce exposure to application server bugs, and shipping default HSTS and HPKP settings for the accounts.firefox.com domain. It also includes some in-browser measures to prevent interference with FxA web content, such as (the currently private) Bug 1415644. As a future step I'd like to see us implement content-signing for accounts.firefox.com and have it enforced by the browser, following the example of things like Bug 1437671.
> Ultimately I think we can have web content from accounts.firefox.com be just as trustworthy as, say, a Mozilla-developed addon which might ship in the browser by default, which is a pretty high bar. We're not there yet, but it seems worth pursuing to try to get the best of both worlds.
"Every time" for this use case is once per browser install, at the moment you perform the authentication with Firefox Sync, which is the same as the number of times you'd want to verify the binary right before authenticating.
The tradeoff they made here has essentially zero impact on the number of times you need to verify their code; it's just a matter of whether, at the moment you authenticate, you'd have to verify browser-native authentication code or authentication code delivered through a website written in JS.
A concern like the one raised in this thread is certainly valid for websites with expiring sessions, where you can switch accounts and log in and out. And we certainly do need better tools around signature verification and version pinning for websites, like we have for binaries (content-addressed networks like IPFS may have good answers there).
But for this use case, it's not a practical concern by any measure, and all this alarmism seems really misdirected.
You're still not addressing the ease with which a targeted attack can be directed at a single user.
In order to compromise Firefox native code, they would have to compile malicious code and ship it to everyone. My distro maintainers would need to include the malicious binary in their repos, including a signed hash of the compromised binary, and I'd need to install it, at which point my package manager would verify the hash.
In order to compromise a single user's browser session, they'd simply need to fingerprint the user's browser and then serve them different content than everyone else gets. No hashes or signatures on javascript, no safety in numbers, etc.
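To make the asymmetry concrete, here's a minimal Python sketch of the server-side targeting described above. The fingerprint value and file names are purely hypothetical:

```python
# Hypothetical server-side routing: nothing on the client distinguishes the
# poisoned bundle from the normal one, so only the targeted user is affected.
TARGET_FINGERPRINTS = {"fp-target-001"}  # fingerprint(s) of the victim's browser

def script_for(fingerprint: str) -> str:
    """Return which login bundle to serve for a given browser fingerprint."""
    if fingerprint in TARGET_FINGERPRINTS:
        return "login.malicious.js"  # served only to the targeted user
    return "login.js"                # everyone else gets the audited bundle
```

Everyone else keeps receiving the legitimate script, so crowdsourced scrutiny never encounters the malicious variant.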
If someone is using a package manager that does code signing then, indeed, the binary is harder to attack than the JS (if only because the package maintainers would also need to collude).
However, a lot of people get their software from downloaded .exe's or auto-upgrading installations. For them, JS and binary are equally vulnerable (all it takes is a Mozilla signature).
Besides, it is undeniably better to be vulnerable only to an active attack from Mozilla than to a passive attack from them.
Most distributions disable auto-upgrade in Firefox for many reasons (security and auditability being among the main ones), so you won't get auto-upgrade from a distribution.
And even the "download .exes from the internet" use case is precisely as secure as downloading JS from the internet that is verified once per install. To attack someone who has an auto-updating Firefox and downloaded it from the internet, you need to intercept and attack TLS -- but only when the upgrade happens, which is a fairly limited opportunity. The JS attack has the exact same properties if the above comment (that it only gets downloaded once per install) is true.
Therefore it is strictly less secure in the optimal case and no more secure in the sub-optimal case, so security really isn't a strong argument (the real argument is that it allows for more "agile development" -- which is an understandable argument if you cop to that being the only reason for such a design).
If you can attack TLS, the game is over: you can't trust anything. A huge majority of Firefox users use the built-in update mechanism. Making life harder for the majority of users to improve the security of a select few is a questionable decision. And if you really insist, you can always install an addon that computes hashes of the JavaScript resources.
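The hash-checking idea is simple enough to sketch: pin a digest of the script the first time you audit it, then compare on every later load. A minimal illustration in Python (the script content here is a stand-in, not the real FxA bundle):

```python
import hashlib

def verify_resource(content: bytes, pinned_sha256: str) -> bool:
    """Compare the SHA-256 of a fetched script against a locally pinned digest."""
    return hashlib.sha256(content).hexdigest() == pinned_sha256

# Pin the digest when you first audit the script, then re-check on later loads.
audited = b"console.log('fxa login bundle');"  # stand-in for the real JS
pin = hashlib.sha256(audited).hexdigest()

assert verify_resource(audited, pin)             # unchanged script passes
assert not verify_resource(audited + b";", pin)  # any modification is detected
```

Of course this only detects a change; it can't tell a legitimate deploy from a targeted one, which is exactly why it trades the "verify every time" burden for a "re-audit on every change" burden.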
> Making life harder for the majority of users to improve the security of a select few is a questionable decision
I agree in theory, though as an aside this isn't true for distribution packages because usually they are GPG signed with keys that are in TPMs on the release machines. Of course any other internet communication relies on TLS not being broken.
But another attack would be modifying one of Firefox's mirrors to host malicious Firefox (not a TLS attack but an attack of a specific mirror). GPG detached signatures for distribution packages protect against this and many other such problems (obviously some attacks against the build servers of a distribution would be problematic, but the same applies for any software project).
Though to be fair, I don't know if Firefox's auto-updater uses an update system like TUF or a distribution-style update system (which is mostly equivalent in terms of security) which would protect against these sorts of problems.
> Making life harder for the majority of users to improve the security of a select few is a questionable decision.
I don't understand how logins being built-in to the browser is making life harder for the majority of users. It wouldn't make a difference to them. It would make a difference to the development team, but one could easily argue that the development team should be willing to make life slightly harder for themselves in order to make Firefox users more secure.
> Therefore it is strictly less secure in the optimal case and no more secure in the sub-optimal case, so security really isn't a strong argument
I agree. I was arguing for having some form of e2e encryption (like Firefox currently has) as opposed to not having e2e encryption. I wanted to argue against the idea that, because the e2e was implemented in JS, one might as well not have it.
Then, regarding the gap between e2e in JS vs e2e in binary, my point was that JS is just as good in most cases.
> Most distributions disable auto-upgrade in Firefox for many reasons (security and auditability being among the main ones), so you won't get auto-upgrade from a distribution.
Does that mean that the code is only signed by the package distributor, and not Mozilla? Because in that case, the package manager becomes a single point of failure. Then again, I guess that is always the case.
Still, it would be weird if, as far as Mozilla trust goes, a signed exe from the internet were better than a signed package from your preferred package manager.
In openSUSE our build system can be configured to auto-check the signatures of the source archives used for building, so you can check the builds to make sure that we are building from an official source releases (assuming the GPG detach-sign their source tarballs -- something I recommend any software release manager do).
But most distributions do their own builds, and without reproducible builds being universally available -- not to mention that distributions usually have minimal compiler hardening flag requirements as well as patches in some cases -- you couldn't reuse signatures for the final binary. Also the entire package is getting signed, so the binary's signature wouldn't be sufficient (and checking it on-install would be quite complicated as well).
> Still, it would be weird if, as far as Mozilla trust goes, a signed exe from the internet were better than a signed package from your preferred package manager.
I think that has always been the general case, since distributions are an additional layer of people maintaining a downstream copy of a project. But don't get me wrong, most distributions have processes that allow you to verify that the source used for builds of large projects like Firefox are built using the real sources.
There's also usually several layers of human and automated review before a package upgrade can actually land in a distribution.
The vast majority of Firefox users receive updates from Mozilla via the auto-update mechanism, which would be similarly vulnerable to compromise.
(A Linux distribution could also be compromised and used in a targeted way of course)
>> then serve them different content than everyone else gets
To help my understanding, to achieve an attack like this, would the attacker need to circumvent SSL on the client, or takeover the script serving web server? Or is there another attack vector that I'm not seeing?
The attacker in this case would be Mozilla itself. No need for an MITM. In this hypothetical, a government agency contacts Mozilla and says "Here is a canvas/HSTS/other fingerprint. Please serve this malicious code when this fingerprint accesses the login."
The point is that Mozilla can single out individual users for targeted attacks, whereas they could not do that if they had to put the malicious code into Firefox itself.
Right, I see. So the barrier with Firefox itself is that the malicious code would have to be built into the product and served as an update. Although, in that scenario, Mozilla could still serve a malicious update to a single user; it's just harder to fingerprint the target through the updater.
With attack vectors it's also about ease of exploitation. In this case, the ease is high. If the person you are responding to compiles their own browser, the bar to put an exploit in there is already much higher. Yes, there are still attack vectors. And there always will be. The point is they're harder to access.
Your initial comment was pretty adamant that Mozilla had really messed up by delivering the code as JS. However, what is the attack vector that they've introduced by taking this approach?
It sounds to me like you're referring to a man-in-the-middle style attack. However, to the best of everyone's current knowledge, that's simply not possible with SSL.
It's only possible if the attack vector includes having already compromised the user's computer and installed a root certificate. At which point this is all pretty moot.
I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.
In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show up anywhere else).
> I think you have me confused with someone else. I have made no points except the ones in the post you are responding to.
My apologies, that's what I get for reading on my mobile.
> In this case it looks like you're missing the fact that the JS on the server can be changed with great ease and low discoverability (it can be changed just for you, and it won't show up anywhere else).
You raise a reasonable point. It is indeed something everyone should be aware of. It's mostly a matter of trust, not security.
However, the same is equally true of someone you trust changing the binaries, source and/or hashes that are delivered to you; whether you got those from Mozilla, or somewhere else.
I agree that we don’t currently know of easy attacks on SSL if you’re pinning certs (which it sounds like Mozilla does here). But all you need is a rogue CA to MITM SSL if you’re not pinning certs, so I don’t think “simply not possible” is an accurate description of SSL as generally used by the broad web-dev community.
The question is how hard it is to detect tampering. My Linux distribution builds Firefox from source and signs the build. The builds are also checked to be reproducible.
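At its core, that reproducibility check amounts to confirming that two independently produced builds are byte-identical, which a digest comparison can verify. A minimal sketch (file paths hypothetical; real reproducible-builds tooling normalizes timestamps, paths, and more before comparing):

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Stream a file through SHA-256 so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(a: Path, b: Path) -> bool:
    """Two independent builds count as reproducible here iff their digests agree."""
    return sha256_file(a) == sha256_file(b)
```

If an independent rebuild matches the distributed binary, a compromised build server or mirror would have to subvert every independent builder at once to go unnoticed.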
> there’s a world of difference between automatic updates from e.g. Debian and automatic updates from Mozilla.
In what way?
This is obviously somewhat anecdotal, but...
I'm the developer of Heimdall. Software that flashes firmware onto Samsung phones. The software quite literally has the ability to replace almost every piece of software running on your phone. If it were compromised, it could not only own a user's phone, but also potentially everything a user accesses on said phone.
Sure, my software is open-source, and I encourage anyone interested to inspect the code; I'm sure there are bugs. However, the `heimdall-flash` package in the official Debian repositories... I didn't make it, and I have no connection with whoever did. Now, don't be alarmed: despite being several years out of date, to the best of my knowledge it's a perfectly good package, and I'm thankful that the maintainer went to the effort. However, it would be so easy for someone to have published a malicious package. This is pretty powerful software; it has significantly more power than root on your mobile phone.
I love Debian, both philosophically and in practice. But does it really deserve your trust more than Mozilla?
It's perfectly normal for Debian packages to be maintained by people other than the original developers of that piece of software, isn't it? Debian has more than 60000 packages but doesn't have 60000 package maintainers – the roles are quite separate.
For example, Linus Torvalds doesn't maintain the Debian kernel packages. If whoever does were to put malicious code in the kernel packages, that would be very bad, just as if Heimdall were compromised, which is why Debian has a relatively small set of trusted package maintainers and doesn't let just anyone put code in the official distribution.