astrobe_'s comments

There's a negligible number of "power users" among government employees; I think the majority of them are trained in reading and applying laws, and given the strong science/literature divide in French culture, they usually think of themselves as inept with computers (and the erratic behavior of MS products didn't help, if you ask me).

But knowing France, the real thing to worry about is execution, in particular in the administrations. People working there who read TFA are probably already thinking "oh, big mess incoming", even though they don't know what this "Linux" thing is.

I think standard IT/sysadmin training focuses mainly on Windows Server etc., with Linux being a second-class citizen (because Windows is what the vast majority of small/mid-sized businesses use). So recruiting good Linux sysadmins could be an issue, especially since wages in government agencies are not exactly attractive.


85% of cloud servers are Linux. It's not a niche product for people who work with servers.

I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the customer; id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with at some point in the supply chain.
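
Roughly, what the boot ROM does boils down to this; a minimal Python sketch (not any vendor's actual code, file names hypothetical), using the `cryptography` package:

    # Sketch of the check a secure-boot ROM conceptually performs before
    # jumping to firmware. The public key is normally baked into the
    # device (e.g. via eFUSEs); only images signed with the matching
    # private key will boot.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def firmware_is_genuine(image: bytes, signature: bytes, pubkey: bytes) -> bool:
        try:
            Ed25519PublicKey.from_public_bytes(pubkey).verify(signature, image)
            return True
        except InvalidSignature:
            return False

    image = open("firmware.bin", "rb").read()
    sig = open("firmware.sig", "rb").read()
    pub = open("vendor_pubkey.raw", "rb").read()  # 32-byte raw Ed25519 key
    print("boot" if firmware_is_genuine(image, sig, pub) else "halt: tampered image")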

Computers should abide by their owners. Any computer not doing that is broken.

It's a simple solution to enact in law: force manufacturers to allow owners of a computer to put any signing key in the BIOS.

We need this law. Once we have this law, consumers can get the maximum benefit of secure boot without losing control.


But that's how it already works.

If you install Windows first, Microsoft takes control (but it graciously allows Linux distros to use their key). If you install Linux first, you take control.

It's perfectly possible for you to maintain your own fully secure trust chain, including a TPM setup which, e.g., lets you keep a 4-digit PIN while keeping your system secure against brute-force attacks. You can't do that with the 1990s "encryption is all you need" style of system security.


It's funny, but I just encountered this for the first time the other day. It felt like I had to do a lot of digging to find out how to do this so that I could add my LUKS key to my TPM... it really took some doing on the HP all-in-one that I was trying to put Debian on. Maybe because it was Debian being Debian.

Not really. There are many laptops where you cannot really get rid of the Microsoft key, and also cannot enroll your own key.

Most embedded processors sadly don't have a BIOS, and the signing key is permanently burned into the processor via eFUSEs.

Yes, BIOS is really a PC thing, AFAIK. Embedded processors have "bootloaders" which often serve a similar purpose of performing the minimal viable hardware initialization in order to load the OS kernel.

> It's a simple solution to enact in law: force manufacturers to allow owners of a computer to put any signing key in the BIOS.

...it's already allowed. The problem is that it isn't the default, but an opt-in that takes quite a lot of knowledge to set up.


I have set it up on the worst laptops. But there are laptops, like the HP x360, which don't allow modification at all.

I make the analogy with a company because, on that front, ownership seems to matter a lot in the Western world. It's as if a company had to accept unfaithful management, appointed by another company it is a customer of, as a condition for using that company's products. Worse, said provider is also a provider for every other business, and its products are not interoperable. How long before courts jump in to prevent this and give control back to the business owner?

This gets tricky. If I click on a link intending to view a picture of a cat, but instead it installs ransomware, is that abiding by its owner or not? It did what I told it to do, but not at all what I wanted.

We don't need to get philosophical here. You (the admin) can require you (the user) to input a password to signify to you (the admin) that the ransomware should be installed when a link is clicked. That way no control is lost.

What if the cat pictures are an app too? The computer can't require a password specifically for ransomware, just for software in general. The UI flow for cat-picture apps and ransomware will be identical.

A computer that can run arbitrary programs can necessarily run malicious ones. Useful operations are often dangerous, and a completely safe computer isn't very useful.

Some sandboxing and a little friction to reduce mistakes is usually wise, but a general-purpose computer that can't be broken through sufficiently determined misuse by its owner is broken as designed.


If you connect your computer to the Internet, it can get hacked. If you leave it logged in unattended or don't use authentication, someone else can use it without your permission.

This isn't rocket science and it has nothing to do with artificially locking down a computer to serve the vendor instead of the owner.

Edit: I'd like to add that no amount of extra warranty from the vendors is going to cover the risk of a malware infection.


The ransomware can encrypt the files in your home directory just as well with secure boot enabled.

This is just another example of how secure boot provides zero additional security for the threat models normal users face.



And what if that customer wants to run their own firmware, e.g. after the manufacturer goes out of business? "Security" in this case conveniently prevents that.

Well, that's a different market. What I'm saying is that there are markets in which customers want to be sure that the firmware is from "us".

And those markets are certainly not IoT gizmos, which I suspect induce some knee-jerk reactions; I understand that, because I'm a consumer too.

But big/serious customers actually look at the financial health of the company they buy from, and would certainly not consider running their own firmware on someone else's product; they buy off-the-shelf products precisely because it's not their domain of expertise (software development and/or whatever the device does), most of the time.


you click the box to turn off secure boot

And how do you do that on some locked down embedded device? Say, a thermostat for instance.

...and then some essential software you need to run detects that and refuses to run. See where the problem is here?

It does no such thing if you enrol your own keys using the extremely well documented process to do that.

It's fair to think of secure boot only in the PC context, but the model very much extends to phones. It seems ridiculous to me that to use a coupon for a Big Mac I have to compromise on what features my phone can run (either by turning on secure boot and limiting myself to the stock OS, or limiting myself to the features and pricing of the one or two phones that allow re-locking).

And the PC situation is only a leftover due to historical circumstances that will be "corrected" in due time. Microsoft already tried this once with their ARM devices.

Where is this "extremely well documented process" to enroll new signing keys on an embedded device? I don't see one for any of these embedded processors with secure boot.

https://pip-assets.raspberrypi.com/categories/1214-rp2350/do...

https://documentation.espressif.com/esp32_technical_referenc...

https://docs.amd.com/v/u/en-US/ug1085-zynq-ultrascale-trm


Tradeoffs. Which is more likely here?

1. A customer wants to run their own firmware, or

2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or

3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition to ensure the malware loads before the OS, thereby permanently disabling all ability for the system to repair itself from within itself

Apple uses #2 and #3 in their own arguments. If your Mac gets hacked, that's bad. If your iPhone gets hacked, that's your life, and your precise location, at all times.


1. P(someone wants to run their own firmware)

2. P(someone wants to run their own firmware) * P(this person is malicious) * P(this person implants this firmware on someone else’s computer)

3. The firmware doesn’t install itself

Yeah, I think 2 and 3 are vastly less likely than 1, strictly lower in fact.


As an embedded programmer in my former life, the number of customers that had the capability of running their own firmware, let alone the number that actually would, rapidly approaches zero. Like it or not, what customers bought was an appliance, not a general purpose computer.

(Even if, in some cases, it was just a custom-built SBC running BusyBox, customers still aren't going to go digging through a custom network stack).


The customers don't have to install the firmware themselves, they can have a friend do it or pay a repair shop. You know, just like they can with non-computerized tools that they don't fully understand.

I’m not talking about your buddy’s Android phone, the context was embedded systems with firmware you’re not going to find on xda developers. A “friend” isn’t going to know jack shit about installing firmware on an industrial control.

This guy thinks that if you rephrase an argument but put some symbols around it you’ve refuted it statistically.

P(robably not)


The argument is that P(customer wants to run their own firmware) cancels out and 2,3 are just the raw probability of you on the receiving end of an evil maid attack. If you think this is a high probability, a locked bootloader won’t save you.

Very neat, but 1) is not really P(customer wants to run their own firmware), but P(customer wants to run their own firmware on their own device).

So, the first terms in 1) and 2) are NOT the same, and it is quite conceivable that the probability of 2) is indeed higher than that of 1) (which your pseudo-statistical argument aimed to refute, unsuccessfully).


As if the monetary gain of 2 and 3 never entered the picture. Malicious actors want 2 and 3 to make money off you! No one can make reasonable amounts of money off 1.

I encourage you to re-evaluate this. How many devices do you own (or have you owned) which have a microcontroller? (This includes all your appliances, your clocks, and many things you own which use electricity.) How many of these have you reflashed with custom firmware?

Imagine any of your friends, family, or colleagues. (Including some non-programmers/hackers/embedded-engineers) What would their answers be?


I would reflash almost all my appliances if I could do so easily since they all come with non-optimal behavior for me.

On Android, according to the Coalition Against Stalkerware, there are over 1 million victims every year of spyware deliberately placed on an unlocked device by a malicious user close to the victim.

#2 is WAY more likely than #1. And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).

As for #3: the point is that it's a virus. You start with a WebKit bug, you get into the kernel from there (it sometimes happens); but this time, instead of a software update fixing it, your device is owned forever. It literally cannot be trusted again without a full DFU wipe.


And where are the stats, for comparison, on people running their own firmware who are not running stalkerware? You don't need firmware access to install malware on Android, so how many stalkerware victims would actually have been saved by a locked bootloader?

The entirety of GrapheneOS is about 200K downloads per update. Malicious use therefore is roughly 5:1.

> You don’t need firmware access to install malware on Android, so how many of stalkerware victims actually would have been saved by a locked bootloader?

With a locked bootloader, the underlying OS is intact, meaning that the privileges of the spyware (if you look in the right settings panel) can easily be detected, revoked, and removed. If the OS could be tampered with, you bet your wallet the spyware would immediately patch the settings system, and the OS as a whole, to hide all traces.


LineageOS alone has around 4 million active users. So malicious use is at most 1:4, not 5:1.

Assuming that we accept your premise that the most popular custom firmware for Android is stalkerware (I don't). This is of course firmware-level malware, which of course acts as a rootkit and is fully undetectable. How did the Coalition Against Stalkerware, pray tell, manage to detect such an undetectable firmware-level rootkit on over 1 million Android devices?

> The entirety of GrapheneOS is about 200K downloads per update. Malicious use therefore is roughly 5-1.

Can you stop this bad faith bullshit please? "Stalkerware" is an app, not an alternate operating system, according to your own source. You're comparing the number of malicious app installs to the number of installs of a single 3rd party Android OS which is rather niche to begin with.

You don't need to install an alternate operating system to stalk someone. And in fact that's nearly impossible to do without the owner noticing because the act of unlocking the bootloader has always wiped the device.

> The Coalition Against Stalkerware defines stalkerware as software, made available directly to individuals, that enables a remote user to monitor the activities on another user’s device without that user’s consent and without explicit, persistent notification to that user in a manner that may facilitate intimate partner surveillance, harassment, abuse, stalking, and/or violence. Note: we do not consider the device user has given consent when apps merely require physical access to the device, unlocking the device, or logging in with the username and password in order to install the app.

> Some people refer to stalkerware as ‘spouseware’ or ‘creepware’, while the term stalkerware is also sometimes used colloquially to refer to any app or program that does or is perceived to invade one’s privacy; we believe a clear and narrow definition is important given stalkerware’s use in situations of intimate partner abuse. We also note that legitimate apps and other kinds of technology can and often do play a role in such situations.

- https://stopstalkerware.org/information-for-media/


This assumes a high level of technical skill and effort on the part of the stalkerware author, and ignores the unlocked bootloader scare screen most devices display.

If someone brought me a device they suspected was compromised and it had an unlocked bootloader and they didn't know what an unlocked bootloader, custom ROM, or root was, I'd assume a high probability the OS is malicious.


> And that's on Android which still has some protections even with a sideloaded APK (deeply nested, but still detectable if you look at the right settings panels).

Exactly, secure boot advocates once again completely miss that it doesn't protect against any real threat models.


Clearly you've never met my exes (or a past employer). Not even being sarcastic this time.

You expect that stuff to happen with three-letter agencies.

Sorry, I have no idea what you are trying to say.

> 2. Someone malicious close to the customer, an angry ex, tampers with their device, and uses the lack of Secure Boot to modify the OS to hide all trace of a tracker's existence, or

Lol, security people are out of their minds if they think that's actually a relevant concern.

> 3. A malicious piece of firmware uses the lack of Secure Boot to modify the boot partition to ensure the malware loads before the OS, thereby permanently disabling all ability for the system to repair itself from within itself

Oh no, so now the malware can only permanently encrypt all the user's files and permanently leak their secrets. But hey, at least the user can repair the operating system instead of having to reinstall it. And in practice they can't even be sure about that, because computers are simply too complex.


#2 and #3 are fearmongering arguments and total horseshit, excuse the strong language.

Should either of those things happen, the bootloader puts up a big bright flashing yellow warning screen saying "Someone hacked your device!"

I use a Pixel device and run GrapheneOS, the bootloader always pauses for ~5 seconds to warn me that the OS is not official.


Yes. They're making the point that your flashing yellow warning is a good thing, and that it's helpful to the customer that a mechanism is in place to prevent it from being disabled by an attacker.

No, they've presented a nonsense argument which Apple uses to ban all unofficial software and firmware as if it had some merit.

Then that customer shouldn't buy a device that doesn't allow for their use case. Exercise some personal agency. Sheesh.

What happens when there are no more devices that allow for that use case? This is already pretty much the case for phones, it's only a matter of time until Microsoft catches up.

There are still phones not obeying the megacorps. Sent from my Librem 5.

Does your Librem 5 run banking apps, though?

Waydroid lets you run Android apps that don't require SafetyNet. If your bank forces you into the duopoly with no workaround, that's a good reason to switch.

And you only have that option as long as people oppose that secure boot enabled dystopia.

I don't know about executable signing, but in the embedded world SecureBoot is also used to serve the PRODUCER; id est provide guarantees to the PRODUCER that the firmware of the device they SELL has not been tampered with at some point in the PROFIT chain.

In my case a firmware provider went out of business, and on one particular device the firmware gets stuck in an endless boot loop. It tries to calibrate some LEDs, but forgets to round some differences, so it can never converge to a proper calibration.

The device is bricked, the firmware is secured with a signing key, and re-creating a new device is pretty hard; the current one needed 10 years of development. I'm waiting either to patch the firmware by finding the problematic byte (if it's patchable; fixing round() needs much more), or for the original dev to be willing to release an update on his own. BTW, Claude Opus has gotten much better than Ghidra lately. It's perfect.

I see the value of protected firmware updates, but the business has to survive, too.


Frankly: that's stupid. In case you didn't figure it out, I work in the field, and I can tell you that this was not the mindset at the places where I worked.

> id est provide guarantees to the customer that the firmware of the device they receive has not been tampered with

The firmware of the device being a binary blob for the most part... Not like I trust it to begin with.

Whereas my open source Linux distribution requires me to disable SecureBoot.

What a world.


You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.

There's also plenty of folks combining this with TPM and boot measurements.

The ugly part of SecureBoot is that all hardware comes with MS's keys, and lots of software assume that you'll want MS in charge of your hardware security, but SecureBoot _can_ be used to serve the user.

Obviously there's hardware that's the exception to this, and I totally share your dislike of it.


> You can set up custom SecureBoot keys on your firmware and configure Linux to boot using it.

Right, but as engineers, we should resist the temptation to equate _possible_ with _practical_.

The mere fact that even the most business-oriented Linux distributions have issues playing along with SecureBoot is worrying. Essentially, SB has become a Windows-only technology.

The promise of what SB could be useful for is even muddier. I would argue that the chances of being a victim of firmware tampering are pretty thin compared to other attack vectors, yet somehow we all ended up with SB, and its most significant achievement is training people that disabling it is totally fine.


+1

An unsigned hash is plenty of a guard against tampering. The supply chain and any secret sauce that went into that firmware is just trust. Trust that the blob is well-intentioned, trust that you downloaded from the right URL, checked the right SHA, trust that the organization running the URL is sanctioned to do so by Microsoft...

Once all of that trust for every piece of software is concentrated in one organization, Microsoft, Apple or Google, it has become totally meaningless.
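
For concreteness, the "checked the right SHA" step is just this (a Python sketch; the digest and file name are placeholders, not real published values):

    # An unsigned digest only proves the blob matches whatever digest
    # you were told to expect; the trust is in where that digest came from.
    import hashlib

    def sha256_matches(path: str, expected_hex: str) -> bool:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == expected_hex

    print(sha256_matches("firmware.bin", "ab" * 32))  # placeholder digest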


It's to serve the regulators. The Radio Equipment Directive essentially requires the use of secure boot for new devices.

I happen to like knowing that my mobile device did not have a ring 0 backdoor installed before it left the factory in Asia. SecureBoot gives me that confidence.

No it doesn't? The factory programs in the secure boot public keys.

The public keys are provided by the developer. Google, or Apple, for example. It's how they know that nothing was tampered with before it left the factory.

"Nothing has been tampered with" doesn't mean there's no factory backdoor; it only means "same as factory", nothing more.

Apple or Google know what the cryptographic signature of the boot should be. They provide the keys. It's how they know that "factory reset" does not include covert code installed by the factory. That's what we're talking about.

This is true for phones but not for IoT in general.

Well, unless the govt tells MS to tamper with it.

One thing that confused me in TFA is that it says that "[neanderthals were] maybe a couple of thousand breeding individuals", yet they were enough to interbreed with sapiens at some point(s) [1]. In my mind, tribes of "far-flung populations of just a few dozen individuals" would be shy and difficult to find.

[1] https://en.wikipedia.org/wiki/Neanderthal_genetics


Ambiguë (ambiguous) and aiguë (acute) [1], but these are "old" spellings.

For instance, this word "ambiguë" was changed in the 1990 spelling reform to "ambigüe" [2], probably to emphasize the fact that the U is not mute (because for most -gue words it is, as in "fatigue" in both French and English).

Like with ï and ü, the tréma mark is precisely the mark of an exception.

[1] https://fr.wiktionary.org/wiki/ambigu%C3%AB , https://fr.wiktionary.org/wiki/aigu%C3%AB

[2] https://en.wiktionary.org/wiki/ambig%C3%BCe


> It only really makes sense for extremely memory-constrained embedded systems

Even "mildly" memory constrained embedded systems don't use swap because their resources are tailored for their function. And they are typically not fans [1] of compression either because the compression rate is often unpredictable.

[1] Yes, they typically don't need fans because overheating and using a motor for cooling is a double waste of energy.


It's pretty hard to hide it from anything. Its surface is ~17,000 m² (a tennis court is ~260 m²), and it is 75 m high (~ a 25-floor building; probably half of it under water, but still). And that's a mid-sized carrier, according to Wikipedia.

It's not built for hiding at all; that's what submarines are for (and that's where our nukes are).


But the ocean is very, very big; finding it is still hard.


You don't have to search the entire planet. A carrier's general location is always semi-public. There are websites dedicated to tracking them, just like jets. And carriers roll with an entire strike group of 8-10 ships and 5-10K personnel, which are together impossible to miss.

A carrier strike group isn't meant to be stealthy. Quite the opposite. It is the ultimate tool for power projection and making a statement. If it is moving to a new region it will do so with horns blaring.

Obviously troops shouldn't be broadcasting their location regardless, but this particular leak isn't as impactful as the news is making it out to be.


https://en.wikipedia.org/wiki/SOSUS

Am I supposed to believe we live in a world where this exists, yet carriers are impossible to find and track on the sea?

Besides, modern fighter jets have radars with 400 km detection ranges against fighter-sized targets.

A dozen of them, or more specialized sensor aircraft, could cover entire conflict zones.


Of course it's possible to find a giant ship. The interesting parts are that this vector is crazy cheap using public APIs, and the irony of the location source being the voluntary-or-ignorant active telemetry from a US service person.

It's possible to go to the moon, launch ICBMs, and make fusion bombs. It's news when something possible gets cheap and easy. It's also newsworthy when one of the most powerful and expensive weapon platforms in history doesn't have its infosec buttoned down.


Interesting point. On one hand, they probably don't care if everyone knows where the carrier is (actually, I'm pretty sure every military power knows where the other powers' militaries are); on the other hand, from a "good practices" perspective, it doesn't look good.

Would it just be virtue signaling, or is there more to it?


>It's also newsworthy when one of the most powerful and expensive weapon platforms in history doesn't have its infosec buttoned down.

Well, peace makes you sloppy. No one is at war with France right now, and no one is realistically going to attack this ship.

If we were fighting WW3, you can bet sailors wouldn't be allowed to carry personal cellphones at all. Back in WW2, even soldier's letters back home had to be approved by the censors.


And American carriers never operate alone; it's a whole Carrier Battle Group there.


The battle group doesn't cruise around in formation, for specifically this reason.


Ah, yes, Ticonderogas should be so far from the carrier that they couldn't even protect it, even though protecting their carrier is their main duty. Makes sense.


Is that what I said?


You must choose between being pedantic and having common sense.


Well, clearly, since the De Gaulle is using a fitness app, it's working on it.


If they were trying to hide it, the top would probably be painted blue.


Data cache issues are one case of something being surprisingly slow because of how the data is organized. That said, Structure of Arrays vs. Array of Structures is an example where rules 4 and 5 somewhat contradict each other, if one confuses "simple" and "easy": Structure-of-Arrays style is "harder" because we don't see it often; but then, if it's harder, it is likely more bug-prone.
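
To illustrate the layout difference, here is a short numpy sketch (illustrative only; names and sizes are arbitrary):

    # AoS vs SoA. In the AoS layout, x, y, z are interleaved per element;
    # in the SoA layout, each field is its own contiguous array, which is
    # friendlier to the data cache (and SIMD) when a loop touches only
    # one field at a time.
    import numpy as np

    n = 1_000_000

    # Array of Structures: one record per element.
    aos = np.zeros(n, dtype=[("x", np.float32), ("y", np.float32), ("z", np.float32)])

    # Structure of Arrays: one array per field.
    xs = np.zeros(n, dtype=np.float32)
    ys = np.zeros(n, dtype=np.float32)
    zs = np.zeros(n, dtype=np.float32)

    aos["x"] += 1.0  # strided walk over interleaved records
    xs += 1.0        # contiguous walk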


> The only downside was anything you shared with her would be spread in the entire village before dawn

It's a better service than FB or Instagram, which depress people because everyone only shows their good side there... As you said, she was an essential part of the community ;-)


> It's a better service than FB or Instagram that depress because people only show their good sides there...

Sadly, it's not only that. Social networks are "half-duplex": you are most likely either broadcasting or consuming at any given time. It's not a true dialog. It made FOMO a thing. And worse, it's not only used for showing the good; it's being used to reduce complicated world events into bite-sized good/bad takes, dividing humanity instead of embracing and considering the complexity.


The problem simplicity is facing is mentioned in TFA with the keyword "future-proof", which is a typical instance of FUD in software. It is extremely difficult to fight against, because, just like with fake news, it takes 10 times more effort to debunk it than to make it up. Yes, you spell out the cost of the additional layer, but it is invariably answered with "that's not so expensive", and risk aversion does the rest.


> Object-oriented Forth? Far out.

The cost is stupidly high, though. Look at the source code of [1].

The only good page to take from the OOP book is the automatic and implicit pseudo-variable "self" or "this", which can reduce stack juggling significantly. I've implemented that in my (yet-to-be-published) dialect and it works like a charm. In my experience, you can have that for cheap, and anything more is not worth it from the point of view of a byte-counting Forth programmer.

[1] https://vfxforth.com/flag/swoop/index.html


Yeah, I didn't mean far out as in good. Some people would say that the important thing to take from OOP is message passing. Which I assume is a no-go in Forth? Regardless of dialect.


In communication protocols, you typically send a symbol which tells the receiver the meaning and the syntax of the message, and then the data attached to the message. In technical terms, messages belong to different application protocol data units ("APDU"). The receiver typically uses the APDU symbol (which can be just e.g. a byte) to dispatch the message internally to the right processing routine.

Message passing in OOP is the same thing, and it's ultimately about late binding. Late binding has, indeed, about as much presence in Forth as dynamic typing, contrary to other scripting languages like Lisp or Lua where they are cornerstones, so to speak. Forth is firmly in the early-binding camp, to the point that it doesn't even know forward declarations [1]. Forth programmers won't do anything unnecessary, and they expect their system won't do anything they don't need.

[1] Many scripting languages realized that skipping declarations to be "user-friendly" was a big design mistake and eventually implemented some sort of "strict" mode that became the de facto default. So they have two language features that cancel each other...
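
A rough Python sketch of that dispatch scheme (all tag values and handler names here are hypothetical, not from any real protocol):

    # APDU-style dispatch: the first byte names the message, the rest is
    # payload; the tag selects the processing routine at run time, which
    # is the essence of late binding.
    def handle_ping(payload: bytes) -> None:
        print("ping:", payload.hex())

    def handle_set_param(payload: bytes) -> None:
        print("set-param:", payload.hex())

    HANDLERS = {0x01: handle_ping, 0x02: handle_set_param}  # tag -> routine

    def dispatch(message: bytes) -> None:
        tag, payload = message[0], message[1:]
        HANDLERS[tag](payload)

    dispatch(bytes([0x01, 0xDE, 0xAD]))  # -> ping: dead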


Thanks for the explanation! As you can tell, I'm very ignorant of Forth.

But I'm not sure I quite follow what you're saying. Forth has early binding, explicit forward declarations, and message passing but not in the usual OOP late binding sense. Is that right?


You're welcome. Actually, Forth doesn't have forward declarations as a "direct" feature, but it has what I would call "function values", which can be used like functions and changed almost like variables [1]. This is used for all kinds of things, including forward declarations, because otherwise some recursive algorithms would be very painful to write, if not impossible. It's still not ideal, because it is an unnecessary indirection.
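
A loose Python analogue of that DEFER/IS pattern (names are mine, purely illustrative):

    # `even` can be referenced before its behavior exists and resolved
    # later, enabling mutual recursion, like Forth's DEFER ... IS.
    def _undefined(*args):
        raise RuntimeError("deferred word executed before IS")

    even = _undefined                # DEFER even

    def odd(n: int) -> bool:         # uses `even` before it is resolved
        return n != 0 and even(n - 1)

    def _even(n: int) -> bool:
        return n == 0 or odd(n - 1)

    even = _even                     # ' _even IS even

    print(odd(7), even(10))          # True True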

It could have message passing by doing what I described earlier, like you would do in C by having a switch-case function that dispatches messages to dedicated routines. It would even be easier because there's no type checking, and more specifically no function signatures.

That's something one could do in specific cases where it is advantageous, but otherwise I would say it is "anti-idiomatic" for Forth. In particular, although Forth has no problem with variadic functions and multiple returns, including a variable number of results [2], it is discouraged.

Forth generally tries hard to push things from run-time to compile-time, and Chuck Moore was very pleased to find a way to push things from compile-time to edit-time with ColorForth [3]: the user, by switching the colors of tokens, tells the system what to do with each one: compile it, execute it right now, or do nothing because it is a comment.

[1] https://forth-standard.org/standard/core/DEFER

[2] https://forth-standard.org/standard/core/qDUP

[3] https://colorforth.github.io/cf.htm

