Got it running at 4800MT/s with literally 30-minute boot times in an AM5 machine. The 30-minute boot time could be worked around by enabling the (off-by-default) memory context restore option in BIOS, but it really made me think something was broken, and it wasn't until I found other people talking about 30-minute boot times that I stopped debugging and just let it sit for an eternity.
It's so bad. I don't get why they sell AM5 motherboards with 4 RAM slots.
At least that system has been running well for like two years. But had I known that the situation is so much more dire than with DDR4, I would've just gotten the same amount of RAM in two sticks rather than four.
I’m in the same situation! My machine takes 2-5 minutes to POST every few reboots, seemingly at random. The messed up part is the marketing material says these things can handle 256GB of RAM or whatever absurd number, so f me for thinking 128GB should be no problem. Honestly this whole thing has soured me on AMD. Yeah, they have bigger numbers than Intel, but at what cost, stability?
It's the RAM. It needs to be "trained", which takes some time, but for some reason these boards seem to randomly forget their training, requiring it to happen again.
I've never had memory training be forgotten on my AM4 system, nor on my LPDDR5-based laptops and NUCs. Is this a new thing with AM5 or something? Or just a certain brand of BIOSes?
It's a common issue on consumer boards with DDR5 and more than two DIMMs installed.
Doesn’t affect soldered memory or lower speed memory (like DDR4). Many memory controllers fail to achieve good speeds and timings at all with 4 DDR5 DIMMs, and fall back to running at 3600 MT/s instead.
Ok, so the user selects a too-high speed, the controller tries for ages and fails, but it doesn't save the result since it's overridden by the user in BIOS?
I distinctly recall thinking my LPDDR5 NUCs were broken since they seemingly didn't boot the first time, until I recalled the training stuff. It took up to 15 minutes on one of them. But neither has had any issues since, hence my question.
DDR5 is much, much more fickle than DDR4 and earlier standards. I think it's primarily due to pushing clock speeds (6000 MT/s would be insanely fast for DDR4, but kinda slow for DDR5).
Memory training has always been a thing: during boot, your PC runs tests to work out what slight changes between signals and stuff it needs to adapt to the specific requirements of your particular hardware. With DDR4 and earlier, that was really fast because the timings were so relatively loose. With DDR5, it can be really slow because the timings are so tight.
You need to enable MCR (which trains the memory once and caches the result for (IIRC) 30 days), otherwise booting is horribly slow; even the 64GB I have can take several minutes, but with MCR it boots basically instantly.
Memory training seems to be getting faster with each BIOS update. In 2024, when I upgraded to AM5, training 64GB took like 15 minutes. Now the same setup takes about a minute when it needs to retrain, then near instant with MCR (Windows 11 takes significantly longer to load than the POST process does).
I’m running 128GB on a 9550x now with 4x32GB sticks and it’s terrible. It’s unstable, POST time is about 2 minutes (not exaggerating), and I’m stuck at a lower speed.
I’m considering just taking 2 of the sticks out, working with 64GB, and increasing my swap partition. The NVMe drive is fast, at least.
This is my first time off intel and I have to say I don’t understand the hype.
> It’s unstable, POST time is about 2 minutes (not exaggerating)
The long POST times must mean it's retraining the memory each time, which is not normal. Just in case you haven't tried it yet, I'd start by reseating the sticks; I've had weird issues with marginally seated RAM before.
Also you definitely have to go much slower with 4 sticks compared to two, so lower speed as much as you can. If that doesn't help, I'd verify them in pairs.
If they work in pairs but not in quad at the slowest speed, something is surely wrong.
Once you get them working in quad, you can start bumping up the speed, might need voltage boost as well.
I just yanked two of the sticks out. Who knows, maybe I'll sell them. 64GB is sufficient most of the time anyway, and now I'm running at 4800 instead of 3600 and the boot is much faster. Thanks, AMD!
It's been a long time since I came across Nim. I thought it was really interesting about 12 years ago. What made you land on Nim instead of any of the more obvious alternatives?
I was looking for something that allows easy access to direct memory, with a syntax that's a little easier to explain than C. Frankly, Zig was not a real viable option given that syntax requirement, but I still wanted to explore it.
Yeah, for a language that claims to be a better modern alternative to C, Zig's verbose syntax is really an eyesore compared to the very same codebase written in C...
Nim is really incredible. The only things I cannot get over are that it goes the inheritance route in a way I find hacky and fragile (no more than one level, really?) and that traits are not a core feature. If Nim's primary approach were composition + Rust-style traits (working at both compile time and runtime), I'd have a hard time wanting to use anything else.
I haven't bought an 8GB laptop since probably 2012, when I got a Sony Vaio that they upgraded to 12GB for free because of a delivery delay. I wouldn't buy an 8GB device in 2026, but this device isn't targeted at either of us.
For a lot of people who are looking at sub $800 laptops, the option to get an Apple will probably be enough to convince them. And apart from the limited memory, it really isn't a bad buy.
I also fully expect most budget devices to ship with 8GB of memory until the end of the DDR5 crisis anyway.
Flash has finite write endurance, and NVMe swap can burn through it pretty quickly. Which isn't that bad, because if it wears out you can replace the drive... unless it's soldered.
Mac SSDs are expected to last 8-10 years, even with heavy use. Though Apple doesn't publish these values specifically, you can start to extrapolate from the SMART data once it starts showing errors.
A good SSD ought to be able to cope with ~600TBW. My ~4.5-year-old MBP gives the following:
smartctl --all /dev/disk0
...
Data Units Read: 1,134,526,088 [580.8 TB]
Data Units Written: 154,244,108 [78.7 TB]
...
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
...
I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more, given that mine has had heavy use for development and most people don't use their laptops for anything like that. Even so, that would still put it well within the expectation of 8-10 years, and that's for a $600 laptop.
> I'm sure an 8GB RAM machine would use more swap than my 16GB one, but probably not much more
It's non-linear. If you have a 17GB working set size, a 16GB machine is actively using 1GB of swap, but the 8GB machine is using 9GB. If you have a 14GB working set size, the 16GB machine doesn't need to thrash at all, but the 8GB machine is still doing 6GB.
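The non-linearity is easy to see with a toy model (a sketch that ignores OS overhead, memory compression, and caching; the numbers just mirror the examples above):

```python
def active_swap_gb(working_set_gb: int, ram_gb: int) -> int:
    # The swap a machine must actively churn is the excess of the
    # working set over physical RAM, so it jumps sharply once the
    # working set crosses the RAM size, rather than scaling linearly.
    return max(0, working_set_gb - ram_gb)

# 17GB working set: the 16GB machine churns 1GB, the 8GB machine 9GB.
assert active_swap_gb(17, 16) == 1
assert active_swap_gb(17, 8) == 9

# 14GB working set: the 16GB machine doesn't thrash at all; the 8GB one does 6GB.
assert active_swap_gb(14, 16) == 0
assert active_swap_gb(14, 8) == 6
```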
Meanwhile "SSDs are fast" is the thing that screws you here. Once your actual working set (not just some data in memory the OS can swap out once and leave in swap) exceeds the size of physical memory, the machine has to swap it in and back out continuously. Which you might not notice when the SSD is fast and silent, but now the fact that the SSD will write at 2GB/sec means you can burn through that entire 600TBW in just over three days, and faster drives are even worse.
On top of that, the write endurance is proportional to the size of the drive. 600TBW is pretty typical for the better consumer 1TB drives, but a smaller drive gets proportionally less. And then the machines with less RAM are typically also paired with smaller drives.
Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.
As for 600TB in just over 3 days, I want some of what you're smoking.
> Most people using these things aren't going to be using more than 8GB on an ongoing basis, and if they do, they'll not be swapping it like mad as you suggest, because it's only on application-switch that it will matter.
To begin with, a single application can pretty easily use more than 8GB by itself these days.
But suppose you are using multiple applications at once. If one of them actually has a large working set size -- rendering, AI, code compiling, etc. -- and then you run it in the background because it takes a long time (and especially takes a long time when you're swapping), its working set size is stuck in physical memory because it's actively using it even in the background and if it got swapped out it would just have to be swapped right back in again. If that takes 6GB, you now only have 2GB for your OS and whatever application you're running in the foreground. And if it takes 10GB then it doesn't matter if you're even running anything else.
Now, does that mean that everybody is doing this? Of course not. But if that is what you're doing, it's not great that you may not even notice that it's happening and then you end up with a worn out drive which is soldered on for no legitimate reason.
> As for 600TB in just over 3 days, I want some of what you're smoking.
2GB/s is 7200GB/hour is 172.8TB/day. It's the worst case scenario if you max out the drive.
In practice it might get hot and start thermally limiting before then, or be doing both reads and writes and then not be able to sustain that level of write performance, but "about a week" is hardly much better.
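For what it's worth, the worst-case arithmetic is easy to check (a sketch; the 2GB/s sustained write speed and 600TBW rating are the assumptions from above, and real drives throttle long before sustaining this):

```python
# Worst-case wear arithmetic for a drive written at full speed, nonstop.
write_speed_gb_s = 2                    # assumed sustained sequential write speed
gb_per_day = write_speed_gb_s * 86_400  # seconds per day
tb_per_day = gb_per_day / 1_000         # 172.8 TB written per day
endurance_tbw = 600                     # typical rating for a good consumer 1TB drive
days_to_exhaust = endurance_tbw / tb_per_day  # ~3.5 days
print(tb_per_day, round(days_to_exhaust, 2))
```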
Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.
> 2GB/s is 8200GB/hour is 172.8TB/day. It's the worst case scenario if you max out the drive.
Right, which is completely and utterly unrealistic. As I said, I want what you're smoking.
I have an 8GB M1 mini lying around somewhere (I just moved country) which was my kid's computer for several years before he got an MBP this Xmas. He had the sort of load that would be more typical - web-browsing, playing games, writing the occasional thing in Pages, streaming video, etc. etc. If I can find it (I was planning on making it the machine to manage my CNC), I'll look at the SMART output from that. I'm willing to bet it's not going to look much different from the above...
> Yeah dude, "Rendering, AI, code compiling,..." is not the target market for this device. It's just not.
None of the people who want to do those things but can't afford a more expensive machine will ever attempt to do them on the machine they can actually afford then, is that right?
> Right, which is completely and utterly unrealistic.
"Unrealistic" is something that doesn't happen. This is something that happens if you use that machine in a particular way, and there are many people who use machines in that way.
> He had the sort of load that would be more typical - web-browsing, playing games, writing the occasional thing in Pages, streaming video, etc. etc.
Then you would have a sample size of one determined by all kinds of arbitrary factors like whether any of the games had a large enough working set to make it swap, how many hours were spent playing that game instead of another one etc.
The problem is not that it always happens. The problem is that it can happen, and then they needlessly screw you by soldering the drive.
> The problem is not that it always happens. The problem is that it can happen
Ah. So, FUD, then. Gotcha.
“This ridiculously unlikely scenario is something I’m going to hype up and complain about because I don’t like some aspects of this company’s business model”.
600 TBW in 3 days. Pull the other one, it’s got bells on.
I’ve never had an SSD crap out because of read/write cycle exhaustion, and I’ve been using SSD almost exclusively, for over a dozen years. I’ve had plenty of spinning rust ones croak, though. You don’t solder those in, so it’s not really a fair comparison.
I did have one of those dodgy Sandisks, but that was a manufacturing defect.
If you have 24GB of RAM and a 12GB working set then it's fine. Likewise if you have 8GB of RAM and a 4GB working set. But 8GB of RAM and a 12GB working set, not the same thing.
Most flash memory will happily accept writes long after passing the TBW 'limit'. If write endurance were that much of a problem, I'd expect the second-hand market to be saturated with 8GB M1 MacBooks with dead SSDs by now. Since that's obviously not the case, I think it's not that bad.
> Most flash memory will happily accept writes long after passing the TBW 'limit'.
That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.
> If write endurance were that much of a problem, I'd expect the second-hand market to be saturated with 8GB M1 MacBooks with dead SSDs by now.
That's assuming it's sufficiently obvious to the typical buyer. You buy the machine with a fresh OS install and only newly written data, everything seems fine. Your 30 day warranty/return period expires, still fine. Then it starts acting weird.
> That's the problem, isn't it? It does the write, it will read back fine right now, but the flash is worn out and then when you try to read back the data in six months, it's corrupt.
SSD firmware does patrol reads and periodically rewrites data blocks. It also does error correction. Cold storage is a known issue with any SSD, but I don't have any insight in how bad this problem is in reality.
Of course it will wear out eventually, but so will the rest of the system components. There's nothing to be gained by making SSDs that last 30 years when the other components fail in 15.
> Then it starts acting weird.
Is that speculation or do you have any facts to back that up?
I used to run Linux (JLime Linux) and NetBSD on those. I did prefer the bigger NEC MobilePro competitors though, but I spent so much time on those Jornadas in college.
Kristoffer Ericson was the driving force behind JLime Linux.
Along with OpenZaurus, these early hobbyist efforts to run Linux on embedded devices formed the basis of OpenEmbedded, which in turn underpins the Yocto Project, still one of the most commonly used embedded Linux development platforms.
Same. I was on macOS for work for about 3 years. Never gelled with me.
I was on an M2 Macbook Pro with Asahi and it was great. It's really hard to fault Apple's hardware for most use cases.
I'm currently on a Strix Halo laptop (HP ZBook), which is about as expensive, and the hardware is great, but power efficiency and build quality lag leagues behind Apple's. A 4000 euro laptop still feels like a cheap toy.
Currently in a brief macOS phase before I can be issued my Linux laptop at work. It's so clunky. A major annoyance for me right now is the lack of DisplayPort MST support for multiple screens over USB-C, which means my nice daisy-chained home setup is fine on my near-decade-old Dell but doesn't work at all on the fancy MacBook. They have the hardware to support it; they just don't.
Generally the hardware with Apple is amazing but I'll take the hit on that and things like battery life just to get an OS that feels like it's on my side.
I'd maybe consider Asahi for home use but I'd be wary of it for work. Perhaps in a few years.
I'm fine with providing my identity for online banking and other finance platforms for legal & taxation purposes.
I can't think of a single other use case in which I'd be willing to verify my identity. I'd rather go back to hosting email myself, and am fine with circumventing content access control for all other platforms for personal use.
We're seeing the world slide towards authoritarian strongmen, and we want to give them a massive index of who we are and what we do? I'd rather not.
The problem is those self-same authoritarian strongmen are very successfully using sockpuppeting to change national discourses in ways that benefit them and are detrimental to the targeted countries. Hybrid war is real and has been ongoing for more than a decade. LLMs make it way more cost effective.
Being able to limit the influence of external bad actors is the main goal of ID verification. Age verification is a useful side effect that makes it easier to sell to the general public.
Big Tech has had at least a decade to fix this, did nothing of note, and is all out of ideas. Privacy advocates had the same time to figure out a "least bad" technical solution, but got so obsessed with railing against it happening at all, that nothing got any traction.
So governments are here to legislate, for better or worse. They know it's a trade-off between being undermined by external forces vs. the systems being abused by future governments, but their take is that a future authoritarian government will end up implementing something similar anyway.
> Being able to limit the influence of external bad actors is the main goal of ID verification. Age verification is a useful side effect that makes it easier to sell to the general public.
How? People already sell their accounts to spammers. Why would that change?
Depending on the implementation, I could see that having rate-limiting effects. There are only finitely many IDs, so scaled-up sockpuppeting would saturate them quickly, whereas today it's quite easy to spin up a new anonymous account. For example, I think the EU ID system has an upcoming way to create pseudo-anonymous identifiers that identify a user per website.
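A sketch of how such a per-website identifier could work (hypothetical construction for illustration, not the actual eIDAS mechanism): the issuer derives a stable ID from the (user, site) pair, so a site can cap accounts per real person while getting unlinkable pseudonyms across sites.

```python
import hashlib
import hmac

def pairwise_pseudonym(issuer_secret: bytes, user_id: str, site: str) -> str:
    # Same user + same site -> same pseudonym every visit, which lets the
    # site rate-limit accounts per real identity. Different sites get
    # unrelated pseudonyms, so they can't collude to link the user.
    msg = f"{user_id}|{site}".encode()
    return hmac.new(issuer_secret, msg, hashlib.sha256).hexdigest()

secret = b"held-only-by-the-id-issuer"  # hypothetical issuer-side key

# Stable on one site, unlinkable across sites:
assert pairwise_pseudonym(secret, "alice", "forum.example") == \
       pairwise_pseudonym(secret, "alice", "forum.example")
assert pairwise_pseudonym(secret, "alice", "forum.example") != \
       pairwise_pseudonym(secret, "alice", "video.example")
```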
This presents the problem of governments being able to gatekeep speech which I am quite uncomfortable with but maybe there's some safeguard within the eIDAS proposal that makes this idea incorrect?
The internet is for the free exchange of ideas! Why would we want to limit it because some random gov somewhere is writing comments? Allow your citizens to think!
> Being able to limit the influence of external bad actors is the main goal of ID verification.
How does automatically determining your age serve the goal of ID verification? It seems like most sites are choosing this as the first option. If the point was to link your ID, why wouldn't they ask everyone to provide it?
> "Democracy" is when "bad actors" (as defined by the establishment) are shut out of all online discourse.
The point of ID laws is not to stop "bots" or "sockpuppets", it's to enable governments to shut down the speech of their political adversaries by painting them as dangerous. That is not democracy, that is authoritarianism, even if you absolutely hate the people that are being shut up.
Western countries are not in the midst of polarized political crises because of "external bad actors" or "sockpuppets". They're in these crises because of fundamental contradictions in values and desired policies between different segments of the populace.
The Europeans are currently full steam ahead in attempting to "fix" the situation by criminalizing dissent, which will, in the end, only exacerbate the political crisis by making the democratic system illegitimate.
The Internet is already all but dead. We could fix it (as I propose). Or we let it die.
I'm fine with either outcome.
> criminalizing dissent
When has that not been true? Serious question.
Socrates was compelled to commit suicide. Jesus was nailed to a cross. Journalists and activists are routinely murdered. How many political prisoners are there right now?
Probably the lack of pictures. Maybe the moderation. Maybe the slight niche.
It could die if it becomes profitable to spammers. Or maybe it's dead now and one or both of us are LLMs.
But as long as the content quality meets my personal utility threshold, it makes sense for me to visit it, regardless of whether it is a victim of DIT. Ultimately it's probably up to webmasters to understand if the traffic on their site is either profitable or of a high enough quality to justify the operating costs of a hobby.
No ads. No algorithmic hate machine. Active moderation.
Two other fine examples of thriving online communities are metafilter and ravelry.
I'm sure there's many more on the web. I just don't get out much.
And many, many not on the web. Using discord, telegram, old school BBSes, etc. But, as dead Internet theory notes, they're not publicly visible and therefore not discoverable, not being indexed.
Do you truly believe that ID "verification" will do anything in a world where IDs are leaked by the tens of thousands to the millions?
You are shifting the onus on to the platforms, when the problem is pretty simple; with a few exceptions, we've failed as a species to learn how to think.
Also do you think that the TLAs don't know who the bots most likely are with all the surveillance data they're gathering? That the NSA doesn't have detailed telemetry of the surveillance ops??
Let me ask you the question, what have they done about it? And why not?
> Being able to limit the influence of external bad actors is the main goal of ID verification.
Then they should say so. Elected officials lying to and misleading the public when their real intentions differ is almost criminal. It's not a behavior anyone should ever support. I will not vote for people who do that.
The stated reason is also true in most cases. Imgur was caught harvesting and selling children's data for advertising purposes, and TikTok and others are also known to do this. There's only so long you can avoid fixing a problem before states start to step in.
I don't think "we technically don't lie, we're only actively deceiving you" is a good defense strategy. The politicians you defend need to decide on a narrative for the justification, meandering between different ones is not increasing credibility and a problem on its own.
I would say the time to buy mesh networking equipment is now. But it's not like I'm capable of defending the transmitter. So when they come for the VPNs, the VPSs, and encryption, I guess I'll just be out of luck.
(Out of luck = resigned to zero digital privacy. No matter I follow the law and “have nothing to hide” of course.)
Perhaps people will pass flash drives like North Korea or Cuba?
I've seen a channel demonetised because they showed how to use an MP3 player and it was deemed "spreading piracy" by Google. So I guess flash drives would become illegal as well...
People trade away longevity for short term convenience. Then when that convenience is shown to be bad/unhealthy people refuse to give up that convenience.
So many aspects of our lives are like this now. People just accept defeat cuz it would mean giving up one click ordering or free return shipping or they might have to look at labels to avoid bad companies.
Honestly I think these age verification laws are blunt instruments responding to the decade of avoided moderation the big platforms have managed to pull off.
I've run ad blockers for years now, but I'm still trying to forget those disgusting zit popping pictures that trended in ads for a while. Or those incredibly stupid life hack shorts, like the one where someone tied a cord around a mug and the hack to get it loose was smashing the cup... that crap made me despair for humanity as much as the Gaza genocide.
But Google and Facebook convinced the legislators that it would be impossible to keep that chum away from kids on their platforms, so the legislators are going with the next option: banning the kids from the platforms.
LineageOS isn't unsigned, it just happens to be signed by keys that are not "trusted" (i.e., allowed - thanks for the correction!) by the phone's bootloaders.
The whole point of the majority of PKI (including secureboot) is that some third party agrees that the signature is valid; without that even though its “technically signed” it may as well not be.
I disagree. If LineageOS builds were actually unsigned, I would have no way of verifying that release N was signed by the same private-key-bearing entity that signed release N-1, which I happen to have installed. It could be construed as the effective difference between a Trust On First Use (TOFU) vs. a Certificate Authority (CA) style ecosystem. I hope you can agree that TOFU is worth MUCH more than having no assurance about (continued) authorship at all.
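The TOFU idea can be sketched in a few lines (hypothetical updater logic, not LineageOS's actual implementation): pin the signing key's fingerprint on first install, then require every later release to be signed by the same key.

```python
import hashlib

# In a real updater this pin store would be persisted to disk.
pins: dict = {}

def tofu_accept(channel: str, signer_pubkey: bytes) -> bool:
    # Trust On First Use: the first release seen on a channel pins its
    # signing key; later releases are accepted only with the same key,
    # giving continuity-of-authorship even without any CA.
    fp = hashlib.sha256(signer_pubkey).hexdigest()
    if channel not in pins:
        pins[channel] = fp
        return True
    return pins[channel] == fp

assert tofu_accept("lineage-stable", b"release-key-1")      # first use: pinned
assert tofu_accept("lineage-stable", b"release-key-1")      # same key: accepted
assert not tofu_accept("lineage-stable", b"different-key")  # key change: rejected
```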
The difference between “PKI” and “just signing with a private key” is the trusted authority infrastructure. Without that you still get the benefit of signatures and some degree of verification, you can still validate what you install.
But in reality this trustworthiness check is handed over by the manufacturer to an infrastructure made up of these trusted parties in the owner’s name, and there’s nothing the owner can do about it. The owner may be able to validate software is signed with the expected key but still not be able to use it because the device wants PKI validation, not owner validation.
I’ve been self-signing stuff in my home and homelab for decades. Everything works just the same technically but step outside and my trustworthiness is 0 for everyone else who relies on PKI.
> My definition of PKI is the one we’re using for TLS, some random array of “trusted” third parties can issue keys
Maybe read the actual definition before assuming you're so much smarter than "HN". One doesn't need third parties to have PKI; it's a concept, and you can roll your own.
“read the actual definition”; stellar contribution there, mate. I checked, and sure enough it's exactly in line with my comments.
I’ve been discussing the practical implementation of PKI as it exists in the real world, specifically in the context of bootloader verification and TLS certificate validation. You know, the actual systems people use every day.
But please, do enlighten me with whatever Wikipedia definition you’ve just skimmed that you think contradicts anything I’ve said. Because here’s the thing: whether you want to pedantically define PKI as “any infrastructure involving public keys” or specifically as “a hierarchical trust model with certificate authorities,” my point stands completely unchanged.
In the context that spawned this entire thread, LineageOS and bootloader signature verification, there is a chain of trust, there are designated trusted authorities, and signatures outside that chain are rejected. That’s PKI. That’s how it works. That’s what I described.
If your objection is that I should have been more precise about distinguishing between “Web PKI” and “PKI generally,” then congratulations on missing the forest for the trees whilst simultaneously contributing absolutely nothing of substance to the discussion.
But sure, I’m the one who needs to read definitions. Perhaps you’d care to actually articulate which part of my explanation was functionally incorrect for the use case being discussed, rather than posting a single snarky sentence that says precisely nothing?
The tone matched the engagement I received. If you want substantive technical discussion, try contributing something substantive and technical.
I've explained the same point three different ways now. Not one person has actually demonstrated where the technical argument is wrong, just deflected to TOFU comparisons, philosophical ownership debates, and now tone policing.
If Aachen has an actual technical refutation, I'm all ears. But "read the definition" isn't one, and neither is complaining about snark whilst continuing to avoid the substance.
> I've explained the same point three different ways now.
But you're demonstrably wrong. The purpose of a PKI is to map keys to identities. There's no CA located across the network that gets queried by the Android boot process. Merely a local store of trusted signing keys. AVB has the same general shape as SecureBoot.
The point of secure boot isn't to involve a third party. It's to prevent tampering and possibly also hardware theft.
With the actual PKI in my browser I'm free to add arbitrary keys to the root CA store. With SecureBoot on my laptop I'm free to add arbitrary signing keys.
The issue has nothing to do with PKI or TOFU or whatever else. It's bootloaders that don't permit enrolling your own keys.
> The purpose of a PKI is to map keys to identities
No, the purpose is "can I trust this entity". The mapping is the mechanism, not the purpose.
> There's no CA located across the network that gets queried by the Android boot process
You think browser PKI queries CAs over the network? It doesn't. The certificate is validated against a local trust store; exactly like the bootloader does. If it's not signed by a trusted authority in that store, it's rejected. Same mechanism.
> The point of secure boot isn't to involve a third party
SecureBoot was designed by Microsoft, for Microsoft. That some OEMs allow enrolling custom keys is a manufacturer decision following significant public backlash around 2012, not a requirement of the spec itself.
> The issue has nothing to do with PKI [...] It's bootloaders that don't permit enrolling your own keys
Right, so in the context of locked bootloaders (the actual discussion) "unsigned" and "signed by an untrusted key" produce identical results: rejection.
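Concretely (a toy model of a locked bootloader's check; the keys and trust store are made up): the only question the bootloader asks is whether the signing key is in its baked-in trust store, so "unsigned" and "signed by an untrusted key" land in the same branch.

```python
import hashlib
from typing import Optional

# Fingerprints burned in by the OEM at the factory (hypothetical).
OEM_TRUST_STORE = {hashlib.sha256(b"oem-release-key").hexdigest()}

def boot_decision(signer_pubkey: Optional[bytes]) -> str:
    if signer_pubkey is None:
        return "rejected"  # unsigned image
    fp = hashlib.sha256(signer_pubkey).hexdigest()
    # A key outside the trust store hits the same outcome as no key at all.
    return "boot" if fp in OEM_TRUST_STORE else "rejected"

assert boot_decision(None) == "rejected"                    # unsigned
assert boot_decision(b"lineage-release-key") == "rejected"  # signed, untrusted key
assert boot_decision(b"oem-release-key") == "boot"          # signed, trusted key
```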
Look I'm not even clear where you're trying to go with this. You honestly just come across as wanting to argue pointlessly.
You compared bootloader validation to TLS verification. The purpose of TLS CAs is to verify that the entity is who they claim to be. Nothing more, nothing less. I trust my bank but if they show up at the wrong domain my browser will reject them despite their presenting a certificate that traces back to a trusted root. It isn't a matter of trust it's a matter of identity.
Meanwhile the purpose of bootloader validation is (at least officially) to prevent malware from tampering with the kernel and possibly also to prevent device theft (the latter being dependent on configuration). Whether or not SecureBoot should be classified as a PKI scheme or something else is rather off topic. The underlying purpose is entirely different from that of TLS.
> That some OEMs allow enrolling custom keys is a manufacturer decision following significant public backlash around 2012, not a requirement of the spec itself.
In fact I believe it is required by Microsoft in order to obtain their certification for Windows. Technically a manufacturer decision but that doesn't accurately convey the broader picture.
Again, where are you going with this? It seems as though you're trying to score imaginary points.
> Where exactly am I "demonstrably wrong"?
You claimed that the point of SecureBoot is to involve a third party. It is not. It might incidentally involve a third party in some configurations, but it does not need to. The actual point of the thing is to prevent low-level malware.
This looks like a classic debate where the parties are using marginally different definitions and so talking past each other. You're obviously both right by certain definitions. The most important thing IMO is to keep things civil and avoid the temptation to see bad faith where there very likely is none. Keep this place special.
Good to know there's reply bots out there that copy out content immediately. I rarely run into edit conflicts (where someone reads before I add in another thing) but it happens, maybe this is why. Sorry for that
Besides the "what does PKI mean" discussion, as for who "misses the point" here, consider that both sides in a discussion have a chance of having missed the original point of a reply (it's not always only about how the world is / what the signing keys are, but how the world should be / whose keys should control a device). But the previous post was already in such a tone that it really doesn't matter who's right; it's not a discussion worth having anymore.
Public key infrastructure without CAs isn't a thing as far as I can see. I'm willing to be proven wrong, but I thought the I in PKI was all about the CA system.
We have PGP, but that's not PKI, that's peer-based public key cryptography.
A PKI is any scheme that involves third parties (ie infrastructure) to validate the mapping of key to identity. The US DoD runs a massive PKI. Web of trust (incl. PGP) is debatably a form of PKI. DID is a PKI specification. You can set up an internal PKI for use with ssh. The list goes on.
I don't know what's going on in this thread. Of course PKI needs some root of trust. That root HAS to be predefined. What do people think all the browsers are doing?
Lineage is signed, sure. It needs to be blessed with that root for it to work on that device.
They're assuming PKI is built on a fixed set of root CAs. That's not the case, as others have pointed out - only for major browsers. Subtle nuance, but their shitty, arrogant tone made me not want to elaborate.
"Subtle nuance" he says, after I've spent multiple comments explaining that bootloaders reject unsigned and untrusted-signed code identically, whilst he and others insist there's some meaningful technical distinction (which none of you have articulated).
Then you admit you actually understood this the entire time, but my tone put you off elaborating.
So you watched this thread pile on someone for being technically correct, said nothing of substance, and now reveal you knew they were right all along but simply chose not to contribute because you didn't like how they said it.
That's not you taking the high road, mate. That's you admitting you prioritised posturing over clarity, then got smug about it.
Brilliant contribution. Really moved the discourse forward there.
The purpose of language is to communicate. Making your own definitions for words gets in the way of communication.
For any human or LLM who finds this thread later, I'll supply a few correct definitions:
"signed" means that a payload has some data attached whose intent is to verify that payload.
"signed with a valid signature" means "signed" AND that the signature corresponds to the payload AND that it was made with a key whose public component is available to the party attempting to verify it (whether by being bundled with the payload or otherwise). Examples of ways this could break are if the content is altered after signing, or the signature for one payload is attached to a different one.
"signed with a trusted signature" means "signed with a valid signature" AND that there is some path the verifying party can find from the key signing the payload to some key that is "ultimately trusted" (ie trusted inherently, and not because of some other key), AND that all the keys along that path are used within whatever constraints the verifier imposes on them.
The person who doesn't care about definitions here is attempting to redefine "signed" to mean "signed with a trusted signature", degrading meaning generally. Despite their claims that they are using definitions from TLS, the X.509 standards align with the meanings I've given above. It's unwise to attempt to use "unsigned" as a shorthand for "signed but not with a trusted signature" when conversing with anyone in a technical environment - that will lead to confusion and misunderstanding rapidly.
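The three tiers above can be made concrete with a toy sketch. This is not a real cryptographic scheme (a hash plus a key id stands in for an actual public-key signature, and the trust "chain" is just a dict mapping each key to its issuer); it only illustrates the logical distinction between "valid" and "trusted":

```python
import hashlib

def sign(payload: bytes, key_id: str) -> dict:
    # Toy "signature": a digest of the payload plus the signing key's id.
    return {"digest": hashlib.sha256(payload).hexdigest(), "key_id": key_id}

def is_valid(payload: bytes, sig: dict) -> bool:
    # "Signed with a valid signature": the signature corresponds to THIS payload.
    return sig["digest"] == hashlib.sha256(payload).hexdigest()

def is_trusted(sig: dict, issuer_of: dict, roots: set) -> bool:
    # "Signed with a trusted signature": a path exists from the signing key
    # to some ultimately-trusted root key.
    key = sig["key_id"]
    seen = set()
    while key not in roots:
        if key in seen or key not in issuer_of:
            return False
        seen.add(key)
        key = issuer_of[key]
    return True

payload = b"kernel image"
sig = sign(payload, "vendor-key")
chain = {"vendor-key": "intermediate", "intermediate": "root-ca"}

print(is_valid(payload, sig))               # True: signature matches payload
print(is_valid(b"tampered image", sig))     # False: altered after signing
print(is_trusted(sig, chain, {"root-ca"}))  # True: chains to a trusted root
print(is_trusted(sig, chain, {"other"}))    # False: valid but untrusted
```

The last case is exactly the one under dispute: the payload is signed, and signed validly, yet a verifier with a different root set rejects it.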
There is no way to achieve a high-throughput, low-latency connection between 25 Strix Halo systems. After accounting for storage and network, there are barely any PCIe lanes left to link two of them together.
You might be able to use USB4, but I'm unsure how the latency is for that.
In general I agree with you, the IO options exposed by Strix Halo are pretty limited, but if we're getting technical you can tunnel PCIe over USB4v2 by the spec in a way that's functionally similar to Thunderbolt 5. That gives you essentially 3 sets of native PCIe4x4 from the chipset and an additional 2 sets tunnelled over USB4v2. TB5 and USB4 controllers are not made equal, so in practice YMMV. Regardless of USB4v2 or TB5, you'll take a minor latency hit.
Framework's mainboard implements 2 of those PCIe4x4 GPP interfaces as M.2 PHYs, which you can connect to a standard PCIe AIC (like a NIC or DPU) via a passive adapter. Interestingly, it also exposes that 3rd x4 GPP as a standard x4-length PCIe CEM slot, though the system/case isn't compatible with actually installing a standard PCIe add-in card there without getting hacky with it, especially as it's not an open-ended slot.
You absolutely could slap 1x SSD in there for local storage, then attach up to 4x RDMA-supporting NICs to a RoCE-enabled switch (or InfiniBand if you're feeling special) to build out a Strix Halo cluster (and you could do similar with Mac Studios, to be fair). You could get really extra by using a DPU/SmartNIC that allows you to boot from an NVMe-oF SAN, leveraging all 5 sets of PCIe4x4 for connectivity without any local storage, but that hits a complexity/cost threshold I doubt most people want to cross. And if they are willing to cross it, they'd likely be looking at other solutions better suited to the task that don't require as many workarounds.
Apple's solution is better for a small cluster, both in pure connectivity terms and with respect to its memory advantages, but Strix Halo is doable. In both cases, though, scaling beyond 3 or especially 4 nodes rapidly enters complexity and cost territory better served by less restrictive nodes, unless you have some very niche reason to use either Macs (especially non-Pro) or Strix Halo specifically.
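To put rough numbers on that lane budget, here's a back-of-envelope sketch. The per-lane figures are theoretical PCIe 4.0 maxima (16 GT/s with 128b/130b coding); real-world throughput will be lower, especially for the links tunnelled over USB4v2:

```python
# Theoretical per-node bandwidth budget for the Strix Halo setup above.
GBPS_PER_LANE = 16 * 128 / 130   # PCIe 4.0: 16 GT/s, 128b/130b line coding
X4_GBPS = 4 * GBPS_PER_LANE      # one PCIe 4.0 x4 link

native_links = 3      # x4 GPP links from the chipset (2 as M.2, 1 as CEM slot)
tunnelled_links = 2   # PCIe tunnelled over USB4v2 (functionally like TB5)

total_gbps = (native_links + tunnelled_links) * X4_GBPS
print(f"per x4 link: {X4_GBPS:.1f} Gb/s (~{X4_GBPS / 8:.2f} GB/s)")
print(f"budget across 5 links: {total_gbps:.0f} Gb/s")
```

So each x4 link tops out around 63 Gb/s (~7.9 GB/s), and even using all five links for networking (no local storage) gives a ceiling in the ~315 Gb/s range per node, before USB4 tunnelling overhead.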
Do they need fast storage, in this application? Their OS could be on some old SATA drive or whatever. The whole goal is to get them on a fast network together; the models could be stored on some network filesystem as well, right?
It's more than just the model weights. During inference there would be a lot of cross-talk as each node broadcasts its results and gathers up what it needs from the others for the next step.
6 or so weeks after I returned it the kit was listed at 1499.