If you could safely (at the hardware level) replace the phone's image with another, it would be easy to guarantee a rootkit-free phone - all you need is a trusted image.
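A "trusted image" only helps if you can verify it out of band before flashing. A minimal sketch of that verification step, with a hypothetical filename and a made-up stand-in checksum:

```shell
# Sketch: verify a restore image against a published checksum before flashing.
# The filename and KNOWN_GOOD hash here are hypothetical stand-ins.
KNOWN_GOOD="3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
printf 'stand-in image contents' > phone-image.bin   # placeholder for a real image
ACTUAL=$(sha256sum phone-image.bin | awk '{print $1}')
if [ "$ACTUAL" = "$KNOWN_GOOD" ]; then
  echo "checksum matches, OK to flash"
else
  echo "checksum mismatch, refusing to flash" >&2
fi
```

With the stand-in contents above, the mismatch branch is taken; the point is that trust in the image reduces to trust in the channel that published the checksum.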
The Pegasus thing didn't even survive a reboot; it was reinstalled by using the 0-day again after a fresh boot. Replacing the image would have done nothing if they were flashing a version that still had the iMessage vulnerability.
Because these "amateurs" build all the essential tools we rely on today. That wasn't Apple. I cannot really believe what crap I have to read here. Vendor lock-in is a huge factor for insecurity in software.
Amateurs behind what essential tools? Tell me a tool and a name. I've been thinking hard for 10 minutes, and every FOSS tool I used the past week has highly regarded and well-paid professionals behind it.
Are there actual hard numbers on whether open-to-all-eyes is beneficial at all scales?
For example, do public eyes actually catch and fix more Linux bugs than three-letter agencies do? And would this situation be worse if Linux were a very well-funded, closed-source Windows?
I’m ignorant on whether the open source security mantra is founded upon religion or evidence.
> For example, do public eyes actually catch and fix more Linux bugs than three-letter agencies do?
Is it so important who found a bug? A TLA can find a bug, and then it has a choice: it can use it to spy on other countries, or it can fix it to protect its own country.
Your TLA may choose to leave your country unprotected, but that is your country's problem.
> it would be easy to guarantee that you can get a rootkit-free phone
The problem in this case is that the malware gets installed through a zero-click iMessage exploit, not through a "supply chain" attack on the image your phone is running. How would that help?
It could help by simply being sufficiently different. The only reason this type of malware is such a widespread problem is the large monoculture of potential targets. Just like in agriculture (e.g. potatoes, bananas), a monoculture allows a single pathogen to affect an entire crop. In security this is a class break[1].
Utilizing different software implementations limits the scope of this type of attack. The current trend toward increasing centralization and forced-update monoculture is a huge gift to malware authors: they only have to write one version of their malware to affect everyone.
This is a good principle in terms of reducing the overall blast radius of exploits. But to do this the implementations should genuinely be independent.
In practice we may find a monoculture hidden in a layer of the stack below the one we're optimizing for, such as an OS kernel method, TLS library, or chipset which coincidentally has captured the entire market. When a clever enough exploit of such a common resource is found, the problem transforms into one of coordinating patching for it, and a broad ecosystem of higher-level components (like Android or PCs) becomes nearly impossible to thoroughly cover. As such, malware authors may still get away with writing a single version of their software so long as they target a low enough level. With sufficient fragmentation they don't even need to invent their own exploits, just use publicly known CVEs that they can brute-force against older devices.
(Not saying you're wrong; your recommendation may still be better in the long run. We're, after all, weighing the risk levels of black swan events, such as a zero-day at a low level of the stack versus one at a high level of the stack at a high-volume vendor.)
How many people have seen the iMessage source code? A handful of devs at Apple? Closed, proprietary software by definition prevents "more eyes" from looking. Even if we consider an open source product where having "many eyes" review the code is at least hypothetically possible, a large number of people using the software doesn't imply there is also a large amount of people reviewing it.
>Closed, proprietary software by definition prevents "more eyes" from looking
But we were talking about the general case of monoculture, not closed-source monoculture. Even for closed-source software, where more eyes are prevented from looking "by definition", having a monoculture can in theory allow more code audits to be done, because of economies of scale.
> large number of people using the software doesn't imply there is also a large amount of people reviewing it.
Right, but roughly speaking, the number of reviewers should increase monotonically with the number of users. Whether that produces better security overall is anyone's guess. My point was just that there is a counteracting force to consider.
The argument isn't that granting more freedoms to the owner of the device will magically make it more secure in all cases, for most it won't.
The argument is that removing freedoms from owners in the name of security is a false dichotomy because bad actors will still gain the ability to execute arbitrary code whilst owners of devices won't be able to do so.
Also, if I could provide the software I want to run, I'd probably not have iMessage.
Journalists (as here) don't usually get to choose the communication software their sources are comfortable communicating over. They install whatever's required to get the story. And they likely wouldn't install an OS that doesn't let them install such apps.
That's a niche use case. You might not be able to choose what app they require to communicate over, but you can choose what device to install it onto, like a burner, couldn't you? Some apps you might not mind on your personal phone, others you probably do.
Well, yes, but if you think about it, the whole point of a journalist's work phone is just to aggregate a bunch of "burner" accounts. And that's exactly what an attacker would want to steal from a journalist: conversations between them and (or contact details of) another source.
Which is all to say, ideally a journalist would have N phones, one per source. But that's impractical.
Would it? Wouldn't you still need privilege escalation? Having root access is different from being logged in as the root user. Of course being logged in as root comes with all the same security risks as it does if you do this on Linux. But no one uses root as their main account.
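The distinction is visible directly in a shell: an attacker who compromises an unprivileged account still has the escalation step ahead of them. A minimal sketch:

```shell
# Sketch: "having root access" is a separate step from "running as root".
# `id -u` prints the effective UID of the current shell; 0 means root.
if [ "$(id -u)" -eq 0 ]; then
  echo "running as root"
else
  echo "unprivileged; escalation (sudo, su, or an exploit) is a separate step"
fi
```

This is why code execution inside a sandboxed app is not automatically game over: the payload still runs with that app's UID until it finds a way to change it.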
I bet you that image will be provided by the trustworthy people from NSO, free of charge or at a price! Whatever makes you trust their image.
IMHO devices should be rootable, but with a high barrier to entry - something like soldering should be involved. If you're trying to do something you don't understand just because a stranger on the internet told you to, you shouldn't be able to do it.
I just want to remind you that quite recently a few police agencies came together, built a "secure messaging app", fed it to criminals, and tracked all their communications until they had gathered enough information to take down their entire operation.[0]
Or the time when the CIA ran a Swiss encryption company.[1]
The point is, you wouldn't know unless you had a complete understanding of every aspect of your device and every bit of the software.
Nobody would install a bare Linux kernel and use the phone like that; they would install a distro. There are so many vectors of attack: the person who puts the distro together doesn't need to have malicious intent, the supply chain could be compromised.
I think it would have, because the primary attack vector is your messaging app. Some Android phones, such as mine, are locked in such a way that the stock messaging app cannot be uninstalled. I can use another messaging app, but this one will still run on my phone, which means it can still be exploited.
Unfortunately, because my phone no longer receives updates, the only way to secure it is through rooting - but this model cannot be rooted, so my plan is to buy a new phone that can, root it, and probably remove all text messaging apps or find a way to sandbox them in a secure environment.
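For preinstalled apps that the settings UI won't let you uninstall, one commonly used workaround is removing them for the current user over adb, which doesn't require root. The package name below is a hypothetical stand-in, and whether this works depends on the vendor build:

```shell
# Hypothetical package name; removes the app for user 0 only, no root needed.
# -k keeps the app's data; the binary stays on the system partition and
# comes back after a factory reset.
PKG="com.example.messages"
if command -v adb >/dev/null 2>&1; then
  adb shell pm uninstall -k --user 0 "$PKG"
else
  echo "adb not found; install Android platform-tools first" >&2
fi
```

Note this is weaker than rooting: the app is hidden from your user, not purged from the device, so an exploit delivered before removal is unaffected.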
If anything, it would likely make things worse, since you'd now have "convince the user to install your payload" as an attack vector, recreating the phishing problems of desktop platforms.
The device would have to be treated as inherently untrustworthy, like your laptop or a PC in a cafe or library. That is unlike the (Edit: false) current expectation that the hardware and OS of the device are a trusted platform.
Once the system is compromised, the best repair is a full reset (even better, swapping the device, in case the restore image has been tampered with ...). Root powers are needed for analysis, but that's nothing a normal user can do ...
But on the larger point: I agree there should be an option for users to replace firmware and become root. But limiting root access makes work for Pegasus and others harder, which is good.
> But limiting root access makes work for Pegasus and others harder, which is good.
It's not enough to "make it harder"; to actually know whether it's a useful mitigation, you would have to weigh how much harder it makes things against the inconvenience it causes. Pegasus has no problem getting root right now. I strongly suspect they have a built-up hoard of 0-days to apply in case the current favorite technique is patched (how else could you make a business out of it? If you're running a business, you can't allow some other party to control your main product).
So, how much does limiting root access hurt Pegasus? Very little, IMO. A case could be made that it helps them, in the same way that excessive regulation helps large companies, which already have resources and experience dealing with it that smaller companies must overcome to enter the market. Pegasus, and the ability to hack into phones on-demand, may have been largely hidden from the public because it was relegated to a few large players.
And what does everyone get for this? Vendor lock-in, higher prices, less control over your own devices.
Or at least often not by law: there are some stupid laws around WiFi/broadband etc. which can be interpreted to mean that it's not allowed to sell a phone which can be rooted (without a hack), as the user could use it to set up a WiFi hotspot on illegal frequencies. This law was supposedly made because that (with routers) is a problem - except it isn't, as far as I know; it was pure lobby work from a certain industry which would also love users to be forced to use its routers.
Right to root is right to repair.