Dear Linux Kernel CNA, what have you done? (amanitasecurity.com)
97 points by odood on March 7, 2024 | hide | past | favorite | 107 comments


The purpose of CVEs is to ensure that people discussing vulnerabilities are talking about the same thing. CVEs aren't a checklist, they aren't a perfect enumeration, and it shouldn't matter if a CVE is issued for a nonissue.

People who are burdened by requirements to ship (or produce rolling updates) to address every Linux kernel CVE are living in a state of sin. It doesn't make sense for the kernel CNA to alter its behavior to accommodate them.


My first thought on reading "cybersecurity regulations" was whether this was going to be the EU as the root cause again. The legislation mentioned in the article seems to be two EU instruments.

The problem here seems to be fear over how EU legislators and judges will interpret CVEs more than anything the kernel is doing. Looks like a complex and legally risky situation; good luck to the EU devs, hope it stays as a theoretical concern. I imagine common sense will prevail and the law will be interpreted the sensible way.


One possible solution is for people in the EU to set up their own triaging system to triage Linux CVEs (and other CVEs, and maybe other sources of info) in line with EU law. There should be enough affected people willing to fund something like this.

I am not clear how the law affects people outside the EU whose software is distributed in the EU (including open source software that is not covered by the exemptions).

>I imagine common sense will prevail and the law will be interpreted the sensible way.

I hope it will, but I would hate to be in the position of having to depend on that.


Largely my thought as well. Not having a kernel CNA would be far more problematic than the approach outlined for the Kernel CNA team.

Also if interpreted correctly it should help mitigate legal risks for EU companies that rely on Linux that update regularly.


It does matter, because they are inputs into other processes, and the signal to noise has gone down. There’ll be a lot more time wasted in orgs triaging non-security-relevant bugs in the future.


> There’ll be a lot more time wasted in orgs triaging non-security-relevant bugs in the future.

The security teams instituting those processes have only themselves to blame.

I've had to deal with too many urgent Jackson updates along the lines of: "if you turn on the insecure mode that nobody turns on, that lets the client specify the classes to instantiate, that the documentation warns you about, and that requires a code change to enable, and you also include library X, then there's a new gadget that does RCE".


Not to mention DevSecOps folks who only know enough to run certain tools, but don't understand that certain flags don't apply because the canned test doesn't work the same way as your app does.

In my specific example, /auth was reverse proxied to a completely separate app, and /auth/login/bad wouldn't show the same content as / ... And even after explaining that their test is invalid, they still escalate rather than fixing or removing that test, leaving me to explain three more times along the way.


[flagged]


No, they're not. They're assigning blame to people who are insisting that the CNAs do the work of triaging issues for them, without understanding what the issues are and whether they're applicable to their environment. An entirely justified --- and pretty mainstream! --- take on this issue.


It does not matter, because the signal was never there or meant to be there to begin with. CVEs solve a problem of multiple researchers and developers talking past each other about the same vulnerability (or vulnerable subsystem or line of code). It has never been a reliable enumeration of vulnerabilities. Organizations triaging CVEs line-by-line are abusing the system. The system should not bend itself to accommodate that abuse; that just harms everybody else who isn't abusing it.


This reminds me a little bit of the people saying things like "actually, Java is not 'True Object Oriented™' because it's not about messaging like in Smalltalk", and things like that. Well, okay, fine, but it seems indisputable that the phrase "Object Oriented" is used to describe OOP as implemented in Java. When millions of people understand a term "wrong", then that becomes "correct" by virtue of it being so widely used. I mean, modern English is just Old English spoken badly by Vikings and Normans, right?

That's kind of how things work, and not much you or I can do about it.

So we need to think about "What is practical?" and "what is useful?", keeping these realities in mind, rather than insisting on "this is what it was meant to be".

Personally I think a good start would be to rethink the entire messaging and list them as "high impact bugs you probably want to get fixed ASAP", or something along those lines, rather than "security bugs". This should avoid the whole security wankery with memory issues that perhaps maybe possibly could perhaps someday lead to a possible exploit maybe.


CVEs are categorically not "high impact bugs you probably want to get fixed ASAP". If you want that, make a new enumeration.


They're also, equally categorically, not "a list of every bug in every system." If you want that, make a new enumeration.

As 'arp242 says, we need to consider what is useful. Pretending that all CVEs are severe and must be addressed immediately is not useful. Spamming the CVE database with every bug in your tracker is not useful.

Replacing CVEs (and CPEs, which are equally terrible) with something new would be extremely helpful. My question is, who funds that work? NIST currently appear to have NVD resourcing issues, based on the banner on their website.


> When millions of people understand a term "wrong", then that becomes "correct" on virtue of it being so widely used


I don't think this applies here. People literally agree that CVEs are the identifiers beginning with CVE-xxxx published by CNAs. They don't, for example, think they're ticket numbers from the Apache bug tracker. The fact that they use them as if the criteria for publishing a CVE were more rigid doesn't actually make those criteria more rigid.


you’re missing the CVSS aspect, which is intrinsically tied to CVE issuance (at least when issued through the CNA-LR). It’s not _just_ an identifier, it’s an entirely valid and useful tool of triage and classification


For me, CVSS and the people who use it lost all credibility when they asked my team for an urgent update to patch… a PCMCIA bug in the kernel of our EC2 instances.


It's so easy to come up with stories about this that you don't really even need examples. I think everybody just sort of understands that if you put CVSS to the test, it would be ludicrously easy to stack two 8.0+ vulnerabilities next to each other with wildly different severity.


CVSS is a Ouija board. Nobody takes it seriously. It was also introduced long after CVEs were. And even if it was meaningful --- it isn't, but stipulate --- it wouldn't change the fundamental point of CVEs.


CVSS as practiced sometimes sucks; the rules against chaining vulnerabilities to inflate a score are rarely followed. But as specified, it’s actually a good system.

Undercutting my own point though, it doesn’t hurt to rerun a calculation if you think the public vector is “lacking”, or if temporal/environmental metrics matter in your context


I would be interested in seeing a professional vulnerability researcher of any note jumping in here to make a defense of CVSS. I'd rebut, respectfully, if they did. But I don't expect it to happen, despite that there are plenty of researchers on HN.

I feel like I'm on reasonably safe ground when I say that my take on CVSS is a mainstream one in the field.


I've only seen CVSS used by vendors to declare a lower severity rating than is warranted by an earnest understanding of a bug, and bug bounty hunters to do the opposite.

For example, what does Network vs Local vs Physical mean if it's an exploit in a cloud microservice?

Ooh let me consult the tea leaves. What's that? They consider it "Network" even though it's S3 mounted locally as a filesystem? Now that sev:med looks like a sev:crit.

The known alternative to CVSS is to rate severity levels entirely on vibes, and I find vibes to be more accurate.


Maybe you've had bad experiences with some vendors doing analysis; however, it is documented here: https://www.first.org/cvss/v3.0/user-guide

> Network vs Local vs Physical

Network: It has to traverse the network stack. Adjacent: On the same physical network link, (usually this means the ability to send packets that are lower level than TCP/IP). Local: ability to execute code on the local machine as the starting point. Physical: You need to be able to touch the machine.

I'll be the first to admit that it can be difficult for some new players to correctly score their system. The "AV" refers to the attacker's perspective, not how the software is used; this is a common mistake that quite a lot of vendors make.


> Maybe you've had bad experiences with some vendors doing analysis however

I've been on both sides of bug bounty programs over the years.

I've been in corporate meetings where CVSS was summoned to downgrade the severity of high-sev security bugs, when the standard procedure wasn't to use CVSS at all.

I've published my fair share of security bugs.

Hell, I've even talked extensively with Steve Coley about how CVE and CWE intersect with my own experience doing security research.

And that's just some of the stuff I've done under this handle.

My experience with CVSS has consistently shown it to be misused.

Maybe you have enough discipline to use CVSS as it was intended by its designers. The rest of the world does not, by and large.

The main problem with the CVSS is that it's a one-dimensional numeric scale that's meant to measure the kind of complexity that warrants a formal threat model, not a 0-10 rating.


I agree strongly. sev:{info,lo,med,hi,crit}. All you really need.


How do you calculate that? How do you account for whether it’s exploitable over the internet vs. only from an adjacent network? This is what CVSS is good for when applied accurately


The fact that every competent organization has slightly different brackets for those levels is only one of the many reasons why CVSS is a joke.
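For what it's worth, FIRST does publish one baseline set of brackets: the qualitative severity scale in the CVSS v3.1 specification. A minimal sketch of that mapping (the function name is mine):

```python
# Qualitative severity scale from the CVSS v3.1 specification.
# Organizations often draw their own lines, which is the point above;
# this is just the one published baseline.
def qualitative(score: float) -> str:
    if score == 0.0:
        return "none"
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    if score <= 8.9:
        return "high"
    return "critical"

print(qualitative(4.3))   # medium
print(qualitative(9.8))   # critical
```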


CVSS has consistent rules, but the incentives that make people ignore particular rules (vulnerability chaining being the one I’ve seen before) make the public scores questionable sometimes. Still, it’s a useful, if imperfect, tool in our industry, I think.


Take a look at FIRST's FAQ wrt Supplemental Metrics. [1]

It’s so complicated you have to have a degree in CVSS to properly rate a vuln and it’s also highly subjective - which they want it to be.

[1]: https://www.first.org/cvss/v4.0/faq


No, CVSS does not have consistent rules. Even people who support CVSS don't claim it's consistent. It's deliberately designed so that organizations can make it say what they want/need it to.


Can’t speak for others, but I’m talking from some experience here triaging and reporting fwiw. Not that I’m notable :)


I guess it depends on what field you are talking about. I'd say that the typical scores on CVEs can be helpful indicators, but that's really it. I'd agree with you that everyone(?) in the field knows that all players game the system, e.g. Microsoft every Patch Tuesday, or someone with a cool name for a vuln and a blog.


Where is upstream's CVSS score?


Let's continue your reductive line of thinking. If the only purpose is to ensure that developers are talking about the same issue, then why does the kernel need CVEs at all? Their existing bug tracking mechanisms should be entirely adequate, no?


In the 2019 article that is linked to in the LWN article posted upthread, that's exactly what Greg Kroah-Hartman suggested: use the change ID that the Linux kernel is already using as a vulnerability identifier.

Unfortunately, it does not appear that that proposal gained traction, since it's not discussed at all in the more recent LWN article.


Some platforms do in fact do this. If you run the entire stack, more power to you. But the kernel is forked by a hundred different people and everyone has their own bug trackers for that, so having an identifier for a security bug is actually useful to unify those.


I would have no problem at all with the kernel developers introducing the concept of a "universal bug identifier," and developing a system for cataloguing and managing those UBIDs. I would also have no issue with CVEs being replaced by a set of flags on those UBIDs, which people could take notice of or ignore as they saw fit.

I do have a problem with the kernel devs attempting to burn down the only system we presently have, however terrible it is, and with HN justifying the bonfire on the basis that CVEs were always broken anyway.


"Universal bug identifier" is precisely the point of CVE. They're not "broken" any more than a WONTFIX bug is.


There's a good-faith community norm that CVEs are for bugs that the reporter believes are security-related. Sure, that norm is regularly violated, but community standards always are and it doesn't diminish their value.


CVEs have been filed for e.g. memory corruption issues with no known exploit or even plausible path to exploit since time immemorial, or at least since time-since-CVE-was-invented. The idea that there is a burden of proof or certainty required to number something with a CVE is a commercial vendor invention.

It's easy to see why people want CVE to work that way! It implies that people numbering potential security issues are doing a fuckload of work for you. But that work isn't free, and CVE has other purposes in the research community. So, no, I don't think anybody is going to talk the kernel people down from this. They're right.

If you want a feed of "CVEs" that clear a plausibility bar, put that together yourself. A lot of people would love to consume it and sell it to their customers; you'll get a lot of uptake.


It's an interesting idea, but I'm not sure the market is there for the "plausible CVE" replacement you mention. We already have EPSS and KEV, and we regularly see attempts to replace CVSS with something better -- Zoom did something recently, as did Vulncheck I think. They don't tend to get much traction.


All the tooling that's been integrated everywhere is reliant on CVEs and CVSS. All vendors issue their vulns with CVEs, not ZoomVEs. Disruption is not likely unfortunately.


Yes, because vendors love the idea that "the community" is doing the job of digesting and distilling security issues for them, and all they have to do is slap a graphical interface on that data to charge $100k/yr to customers. There is absolutely no reason the Linux CNA should dignify that concern.


More importantly, how do you even get the reporter's full report? Not all vendors will supply this information; a lot of CVE data is lacking, especially from closed-source vendors.


Just to be clear, that the CVE assigner (CNA) believes are security related, not the person asking.

This is a CNA responsibility.


> the signal to noise has gone down

As far as I can tell, the signal to noise has only "gone down" in the sense of "from really low to really, really low".

> There’ll be a lot more time wasted in orgs triaging non-security-relevant bugs in the future

It seems to me that even before this change there was a lot of time wasted in orgs triaging non-security-relevant bugs, because CVEs didn't carry much useful information before.


> because they are inputs into other processes

CVEs should never be the input to anything except a triage pipeline, which in turn feeds other processes. If you don't have a competent pair of eyeballs (either internally or from a vendor) looking at CVEs with the context of how the impacted product is used in your organization, all you are doing is busy work.

Almost all end user organizations (not software vendors, OS distributors, etc) should pretend CVEs don't exist. Blindly apply all your OS and software patches within 24 hours of them being available and be done with it. You are much more likely to suffer a business loss as the result of a vulnerability than you are a patch application.


I think I agree. Erring in the direction of too many CVEs means that people trying to get bug fixes rolled out won’t need to deal with the dreaded “ok, we understand the problem, but we can’t actually fix it in our systems until it has a CVE” that comes all too often from distributors.


Has anyone in this thread actually evaluated all ~320 CVEs since Feb 20? Because I have. And I literally just finished writing a filter script for MITRE's cve.org API output.

1. Most of the new CVEs have potential security concerns. At the very least, they would have been assigned a CVE if they were reported by an external researcher.

2. Many of them don't affect the older LTS branches.

3. I found 1 (1!) CVE that has a remote exploitation possibility. The rest are mostly local privilege escalation, DoS, or crashes.

4. This Greg dude (heh) is backtracking to 2021 to flag bugs that have since been fixed. If your kernel repository is even semi up to date, you would have cut down the CVE count by 1/3.

In my company, the majority consensus amongst the kernel developers is 'patch your shit and do rolling release. We are too busy to evaluate all the CVEs.' On the opposite end are embedded hardware kernel developers who barely got their kernels working on existing hardware. Both sides make good arguments, and I don't have any opinion I'd share.

There are other ways to lessen the CVE workload.

1. Disable unused components with defconfig or make menuconfig.

2. Don't stay on the bleeding-edge branch (i.e. 6.x)

3. Implement automatic minor version commit merges

Your mileage may vary, but with these implemented, the workload is manageable imo.
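A filter script of the kind mentioned above might start out something like this: a minimal sketch assuming CVE JSON 5.x records as returned by the cve.org API. The sample record and CVE ID below are fabricated for illustration, and real kernel records encode version ranges that need proper comparison rather than the prefix match used here.

```python
# Hypothetical sketch: keep only CVE 5.x records whose affected-version
# entries mention a given stable branch. Field names follow the CVE
# record schema (containers.cna.affected[].versions[]).
def mentions_branch(record: dict, branch: str) -> bool:
    affected = record.get("containers", {}).get("cna", {}).get("affected", [])
    for product in affected:
        for v in product.get("versions", []):
            if v.get("status") != "affected":
                continue
            # Naive check: any version field starting with "<branch>."
            for field in ("version", "lessThan", "lessThanOrEqual"):
                if str(v.get(field, "")).startswith(branch + "."):
                    return True
    return False

# Tiny fabricated record, for illustration only.
sample = {"cveMetadata": {"cveId": "CVE-2024-00000"},
          "containers": {"cna": {"affected": [
              {"product": "Linux", "versions": [
                  {"version": "5.15.0", "lessThan": "5.15.149",
                   "status": "affected"}]}]}}}

print(mentions_branch(sample, "5.15"))  # True
print(mentions_branch(sample, "4.19"))  # False
```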


> There are other ways to lessen the CVE workload.

> 1. Disable unused components with defconfig or make menuconfig.

+1 for avoiding vulnerabilities, but were you saying this lessens the CVE evaluation workload? I'd love to hear about automation for evaluating CVEs based on a kernel config. I've done a fair amount of that manually and I'm not aware of any metadata in the CVE records (or in the CVE json in gregkh's new vulns repo) that includes config metadata.


While I understand the problems raised in this post, I think they're going a bit too far. The CVEs assigned to the kernel were already specific to various parts of it. You're not running linux-x.y.z, but rather linux-x.y.z + specific config. That means vendors already needed to look at CVEs and decide what applies to them and what doesn't. It's up to NVD records to include how likely something is to be a problem and give it some description / score.

Choosing a random selection of CVEs posted so far... they look reasonable. They're actual issues and they'll potentially affect someone.

This reminds me of the cookie banner situation. Many people complain about the cookie banners being visible rather than about the companies doing things that require them to notify you. Now if you say you care about the published vulnerabilities, you get to actually see them all. And potentially change the policies around how you work with them. (Yes, it's not a great analogy; I'm not blaming Linux for having each of those vulnerabilities.)


> That means vendors already needed to look at CVEs and decide what applies to them and what doesn't.

So many vendors don't and it's tedious to say the least.


Perhaps people don't care about companies doing it and they don't want to be notified about it?


> Typically, security researchers are held to higher standards when disclosing vulnerabilities. The expectation is that CVEs are assigned for ‘meaningful’ security vulnerabilities, and not for any software fixes that ‘might’ be a security vulnerability.

Maybe that's the aspiration, but it's clearly not the case in practice.

I reported a firefox bug 12 years ago where a malicious SVG could cause a hang - basically a 22-year-old XML bomb, adapted to SVG patterns. My bug turned out to be a duplicate of a 16 year old firefox bug.

No way of stealing user data. No sandbox escape. Not a crash that might indicate a buffer overrun. With a process per tab, it doesn't even crash the browser. It's just a file that takes a very long time to load - and it's not even an image type that user-generated-content sites like facebook and reddit allow you to upload. Reasonably enough, 12 years ago it was triaged as a minor performance issue.

Apparently in 2023, this counts as a CVE.


12 years ago, Firefox wasn't multi-process. So your bug would likely freeze the entire browser, including the UI. And considering that, back then, Firefox reloaded all tabs when you reopened it, it would keep freezing even if you force-closed it. Fun times.


I actually kept such an SVG bomb around as a demonstration of how badly you could break browsers for many years, to anyone who claimed they were completely secure and unbreakable.

I should go see what happens if I load it now, since what changed was less that it stopped breaking them and more that I stopped having the conversation with many people...


> Considering that, back then, Firefox reloaded all tabs back when you reopened it, it would keep freezing even if you force closed it.

That was always an option, as I recall. I think a non-default option, too. Not sure when they started adding the question about if you wanted to restore when you started up after a crash/unsafe shutdown.


CVSS 3.1 score is 4.3 (AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L). (You can somewhat argue UI:N but I don't think it applies in this case.)

Lots of corps would spend a non-trivial amount of effort to remediate something with such a score.
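The arithmetic behind that 4.3 is mechanical. A minimal Python sketch of the CVSS v3.1 base-score formula, with metric weights taken from the FIRST specification (Scope:Unchanged vectors only, so this deliberately ignores S:C):

```python
# CVSS v3.1 base-score calculation, Scope:Unchanged only.
WEIGHTS = {
    "AV": {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2},
    "AC": {"L": 0.77, "H": 0.44},
    "PR": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope:Unchanged weights
    "UI": {"N": 0.85, "R": 0.62},
    "CIA": {"N": 0.0, "L": 0.22, "H": 0.56},
}

def roundup(x: float) -> float:
    # Spec-defined "round up to one decimal" (CVSS v3.1, Appendix A).
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(vector: str) -> float:
    # Parse "AV:N/AC:L/..." pairs; tolerate an optional "CVSS:3.1/" prefix.
    m = dict(p.split(":") for p in vector.split("/") if not p.startswith("CVSS"))
    iss = 1 - ((1 - WEIGHTS["CIA"][m["C"]])
               * (1 - WEIGHTS["CIA"][m["I"]])
               * (1 - WEIGHTS["CIA"][m["A"]]))
    impact = 6.42 * iss
    exploitability = (8.22 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                      * WEIGHTS["PR"][m["PR"]] * WEIGHTS["UI"][m["UI"]])
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

print(base_score("AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L"))  # 4.3
```

The same function reproduces other published S:U scores, e.g. the classic AV:N/AC:L/PR:N/UI:N/C:H/I:H/A:H vector comes out at 9.8.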


Good. Forcing downstream consumers of open source projects to spend resources on identifying and fixing security issues is not just entirely appropriate, but direly needed.

If you're already paying someone to maintain Linux for you, this shouldn't be causing that much trouble; it might need some contractual adjustments but you're already set up to get a stream of "good" updates. The patch frequency may be higher, but other people already do the majority of the work for you.

If you were just ingesting Linux "for free"… well, tough luck. You're profiting from the work of others already, you don't get to complain about not being spoon fed exactly what you need.

In practice, a small number of commercial entities (likely a mix of commercial distributions and designated security companies) will probably offer "Linux as a service". People could do the same work on their own, but that's not cost effective.

Either way, this shift in responsibilities has been long overdue.


Linux as a service is most of Redhat and Canonical's business models.

grsecurity does this from a security angle specifically - in fact they're boasting about it on their homepage right now (fair enough!)

>Are Your Products Drowning in Linux Kernel CVE Noise?

>We know your products can't be updated every week based off unverified CVE information. Address true risk by protecting against entire classes of vulnerabilities and exploitation techniques. Our Pro Support ensures you make the most of attack surface reduction and our proactive defense in your products.

https://grsecurity.net/


> Known vulnerabilities are in practice defined as ‘something with a CVE’

Then change that definition and stop operating off of it. It has never been correct.


Most people don't get a choice of which legislation/regulations to comply with.


There's plenty of people in that position, but they're all working at huge corporations. Nobody ends up having to chase things like SOC2 and PCI-DSS without getting paid for it.

Why should unpaid volunteers working on the Linux kernel do compliance work for FAANG-sized companies without getting paid for it? If these companies want the reports carefully triaged, they can send some employees to carefully triage them.


And these standards do not say "patch every time there is a CVE, even if you are not vulnerable". For example PCI-DSS says "New security vulnerabilities are identified using industry-recognized sources for security vulnerability information, including alerts from international and national computer emergency response teams (CERTs)", it doesn't specifically even mention CVE.

A vulnerability / bug in an upstream component may or may not end up being a vulnerability in a complex system incorporating that component. A vulnerability in upstream functionality that is disabled (or not used, with no possible path through which an attacker could trigger the upstream vulnerability) in a particular system is not a vulnerability in that system. Standards like ISO27001, SOC2, and PCI-DSS generally ask teams to have processes to discover vulnerabilities, but not necessarily to patch things that are not vulnerabilities in practice.


Nobody (at least, not me) is calling for the Linux foundation to do additional work.

They've taken it upon themselves to assign a CVE to every bugfix, and it's being pointed out that that doesn't seem to be helping anyone.


For another opinion on this topic https://jericho.blog/2024/02/26/the-linux-cna-red-flags-sinc...

Having a large number of new, unscored CVEs in the Linux kernel is going to make things... interesting. From their list at https://lore.kernel.org/linux-cve-announce/ these just have a CVE and not really enough detail for anyone to assign a score without a lot of additional analysis, which reduces their usefulness.

To an extent it could be suggested they're just exposing an existing flaw in the system (CVSS scores which may be taken to be scientifically applied, are actually just matters of opinion in many cases), but it will cause a lot of problems with automated tooling and compliance.


> Notably, SyzScope has classified 183 bugs out of 1,170 fuzzer-exposed bugs as high-risk. KOOBE has managed to generate 6 new exploits for previously non-exploitable bugs.

While the rate is low, it does show that some bugs were indeed exploitable without that being known to the kernel devs. If an attacker is willing to invest more time than the kernel devs combing through commits to find vulnerabilities in some older stable kernel, then a big unlabeled pile saying "there's probably a vulnerability in there, go update" is correct.


This way of thinking is how almost everyone approaches CVEs, but it is also out of date now. There are millions of open source projects (tens of millions, really). This attitude of treating security bugs as some sort of special snowflake isn't realistic.

There are easily hundreds of thousands of security vulnerabilities fixed every year that get no IDs because the current process is rooted in security from 1999 (the number is probably way way higher, but you get the idea)

Rather than obsessing over individual vulnerability IDs, we should be building systems that treat this data as one of many inputs to determining risk


Accurately determining risk relies on decent starting data, otherwise you run the risk of Garbage-in, Garbage-out. Whilst things like VEX and EPSS can help, they are based on the starting point that is CVE assignment and CVSS score.

I don't particularly think that CVE+CVSS has been the "right" way to do things ever (definitely not in the last 10 years) but my thoughts don't really matter whilst regulators and governments apply special significance to them, which they do.

Security bugs are special if a regulator can deem you in non-compliance if you have too many of them.

This is of course leaving the whole area of attackers who actively try to exploit them to one side :).


It's possible to take a somewhat unopinionated approach to CVSS, the issue is that such CVSS scores exist in a vacuum, and vulnerabilities exist in environments. It's not possible to really apply a CVSS score to a vulnerability in a specific environment without understanding the vulnerability and more or less ignoring the CVSS score.

In summary, CVSS scores can be very objective, but in those cases they're also worthless.


I cannot really speak to the "Radio Equipment Directive", but what the author claims or implies with regards to the Cyber Resilience Act is not correct.

The CRA's Annexes [1] describe the Vulnerability Handling Processes imposed on manufacturers. The EU obviously only speaks about _exploitable_ vulnerabilities, because they know the problems of the CVE system all too well.

Best of all, open source projects are actively excluded by the CRA. [2] "Open source projects will not be required to directly implement the mandated processes described in the CRA. But every commercial product made available in the EU which is built on top of those open source projects will."

[1]: https://eur-lex.europa.eu/resource.html?uri=cellar:864f472b-...

[2]: https://eclipse-foundation.blog/2023/12/19/good-news-on-the-...


CVE DoS, aka denial of service through legislative/regulatory requirements instead of a technical attack, is going to be fun.

Edit: by that I mean filing bogus reports or just non-security-related CVEs. That is also the reason why a lot of projects are trying to register themselves as CNAs (see curl etc.).


It's going to be fun when companies pick Windows instead of Linux because it doesn't cause an awful-to-handle patch cycle in contexts where things have to work within some regulatory bounds (that make pointless updates cost a lot of time, effort, and money, and maybe even cause risk to human life).


> It's going to be fun when companies pick Windows instead of Linux […] work within some regulatory bounds

You can get FuSa (functional safety) certified Linux; to my knowledge this just does not exist for Windows. There may be other situations where the choice does exist, but considering Windows and Linux widely equivalent in this context is not possible.

> maybe even cause risk to human life

Neither Windows nor Linux are, to my knowledge, certified for SoL (safety-of-life) applications. And to no surprise considering this is close to (but not quite) a mathematical proof your system can't hang/crash/starve, which is pretty much impossible for anything beyond an RTOS with current tooling.


> You can get FuSa (functional safety) certified Linux;

And they're going to ask how much for the recertification for each CVE fixed? I doubt that'd be cheap.

> Neither Windows nor Linux are, to my knowledge, certified for SoL (safety-of-life) applications.

I didn't have exactly SoL applications in mind, there are plenty of other situations where the stability of a system could cause a risk. Be it just an emergency call center server or a field laptop for looking up license plates - can't leave them unpatched (especially with some of the new legislation) but also downtime from poor updates could be really bad.


> And they're going to ask how much for the recertification for each CVE fixed? I doubt that'd be cheap.

FIPS has created an off-kilter perception about "recertification" because they require essentially the entire process when you change a single bit somewhere. Most certifications are not that harebrained.

Also if you need "certified" Linux, you are either already spending resources on it yourself, or paying someone else to do it. This might need adjusting for this new CVE practice, but it's going to be an adjustment and not a reset.

> […] can't leave them unpatched (especially with some of the new legislation) but also downtime from poor updates could be really bad.

Then pay someone to test and deliver.


> Then pay someone to test and deliver.

That's the thing, resources aren't infinite. Linux offloading that work elsewhere will not have a net positive effect.

The path of least resistance will be taken, which means proportionally less QA, if there was any to begin with.


I think I speak for the whole 0day and 1day market when I say "Thank you for this idiocy, Linux community"

In their attempt to get revenge on security researchers for 'being more important', they have made it much easier for us to keep our exploits on the market for longer than before.

The idea that 'only the latest Linux kernel is secure' is absolutely bananas, because it completely disregards the fact that legacy systems exist, they power very critical parts of our society, and they cannot be updated (as a general rule). Only components can be updated in-place.

So, again: thank you for the free money and for making exploits have a longer shelf life.


Oh great, now the previously lazy companies will pressure researchers to "not have discovered" bugs, since legislation now mandates a formal response to anything entered into the system... and we will start hearing, "I don't use open source because it involves triaging too many CVEs. Michaelsoft never gives my engineers mandatory work."


This article largely misses the Linux kernel's point of view.

They have always said "Every bug is a security bug". I don't know about a more general demonstration, but at Kernel Recipes (2019?) gregkh took a Pixel running the latest Google security patches, i.e. with every published CVE fixed. He then looked at the non-CVE patches he had merged into his LTS tree, and it took him less than an hour to find a DoS vulnerability.

I understand the author's frustration, and the Linux kernel community's reluctance to classify bugs, but the reality is that a huge portion of the bug fixes are actually security fixes [1]. So between being required to merge 20% of the patches and being required to merge 100% of them, is there really much difference?

The author mentions the Cyber Resilience Act, and I believe the Linux kernel team created this CNA /on purpose/ to have an impact on the CRA. They believe the only way to have a secure Linux kernel is to have an up-to-date Linux kernel (cf. https://social.kernel.org/notice/ARWvggnOvXny0CUCIa ). With the CRA enacted, running an every-bug-is-a-security-bug CNA is a way for them to enforce that view.

[1] FWIW, my personal opinion is that this shows Linux's monolithic architecture is getting old, but I see nothing that could reasonably replace it. I think that "the dream" would be an LKL-like Linux "arch" that compiles every driver as an independent process, Hurd-style, with a GKI-like stable-ish ABI.


> They have always said "Every bug is a security bug".

If you can't reason about your codebase to a sufficient extent to actually determine that then something is very wrong.

If everything is a CVE, nothing is. That approach just wastes a lot of time and effort, making people far less familiar with the codebase than the maintainers do the triage.

I hope they get burnt quick by this approach.


> If you can't reason about your codebase to a sufficient extent to actually determine that then something is very wrong.

Linux kernel developers are entirely capable of assessing this. They're just refusing to do it for someone else's definition of a "security bug".

"Every bug is a security bug" means "we fix things when they need fixing, categorizing the fixes is not our job and you'll need to do that yourself".

As such, the current new approach is in fact a concession: there's now a broad pre-categorization of fixes you can work from.

> making people far less familiar with the codebase than the maintainers do the triage

You seem to be under the impression that you didn't need to do that before. Which, to be fair, worked for a long time. From an engineering perspective, though, this was always a case of "skipping inspections and verification", because the Linux community never agreed to do that work on top of providing the system.

> I hope they get burnt quick by this approach.

How would they get burnt by this? Social pressure from other kernel developers (or even from outside) isn't going to have that effect. The only possible influence would come from employers paying for Linux work, in which case it's a perfectly reasonable discussion about spending paid time on security issues.


> Linux kernel developers are entirely capable of assessing this. They're just refusing to do it for someone else's definition of a "security bug".

Then instead of this, don't? It's utterly childish.

> How would they get burnt by this? Social pressure from other kernel developers (or even outside) isn't going to have that effect.

Fewer organisations willing to cooperate with them, for one? Social pressure comes in many forms and shapes, there's no way it won't have any effect.

> The only possible influence would be from employers paying for Linux work

They're going to be paying someone else to provide a clean feed instead of the organization that deliberately hinders these efforts.


> They're going to be paying someone else […]

And that's perfectly fine, it's open source software. Either way someone gets paid to look at the patches, which is my point.

If you want to do it in a cost-effective manner, you'll find other people with the same requirements, since the work result is "shareable".

> […] instead of the organization that deliberately hinders these efforts.

There is no such organization, and it feels like you have very little understanding of the organizational (and funding) structures behind the Linux kernel. I really can't extend my comments into a full-blown explanation of this, sorry.

(No, the Linux Foundation does not perform the role you're implying: they don't currently and likely never will sell a "clean feed".)

> Fewer organisations willing to cooperate with them[…]

I have no data on this, but it is entirely plausible (and I believe likely) that the current behavior was requested, or at least encouraged, by the very organisations and people who cooperate with the kernel community.


I think you got a bit confused.

> There is no such organization

There is such an organization: the Linux Foundation is the CNA, and it's the one hindering these efforts. And yes, they won't perform the role themselves; someone else will, and they will be paid for it.

For some that's fine; I find it a significant amount of wasted effort, confusion, and potential for issues.


>> They're going to be paying someone else to provide a clean feed instead of the organization that deliberately hinders these efforts.

You were implying the Linux Foundation is attempting to get paid for providing said "clean feed".

Anyway, this has devolved far enough.

[Ed.: the Linux Foundation isn't even the CNA, shame on me for accepting that without verifying. The actual CNA is kernel.org. https://www.cve.org/Media/News/item/news/2024/02/13/kernel-o... ]


> If you can't reason about your codebase to a sufficient extent to actually determine that then something is very wrong.

The environment where we write critical code the way we do now is very wrong. It's actually not that easy to figure out if something is exploitable or not. What if you add heap grooming? What if you enable another specific feature? What if an application fights for the same lock? What if measuring the time it takes to fail allows you to defeat aslr? People use exploit chains rather than independent ones these days and there are examples of clever cases of single-byte overflows turning into RCE.

Sure, there are going to be cases where you're really really sure something can't be used, because for example the bug only produces a null dereference and an oops. Then someone else comes along and proves you wrong https://googleprojectzero.blogspot.com/2023/01/exploiting-nu...


> The environment where we write critical code the way we do now is very wrong. It's actually not that easy to figure out if something is exploitable or not.

Then the correct approach is not to cause "CVE fatigue", which can have significant second-order effects. Not to mention: who else is better suited to make that assessment? An assessment still has to be made either way, because fundamentally there are use cases where touching a working system needs a really good reason. This will result in actually important things not getting patched, because non-kernel-experts had to make that decision.

I also can't imagine large vendors, forced onto a significantly more frequent update cadence, choosing to retain their current level of QA. Best case, we get more frequent but less tested updates; worst case, we deploy an actual vulnerability riding along with some low-importance bugfix (with an assigned CVE).


A CVE is just an identifier; CVSS is what assigns a score.

I would require every CVE to have exploit demo code attached. Otherwise it shouldn't be a CVE.
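A minimal sketch of that triage rule. Everything below is invented for illustration (the record shape and IDs are not any real feed's schema); the point is simply "no demonstrated exploit, no action":

```python
from dataclasses import dataclass

# Hypothetical triage policy: treat an advisory as actionable only if
# someone has actually attached exploit/PoC code, as the comment suggests.

@dataclass
class Advisory:
    cve_id: str
    cvss: float
    has_poc: bool  # is demo/exploit code attached?

def actionable(advisories, min_score=4.0):
    """Filter to advisories with a PoC and a non-trivial score."""
    return [a for a in advisories if a.has_poc and a.cvss >= min_score]

feed = [
    Advisory("CVE-0000-0001", 9.8, has_poc=False),  # scary score, no demo
    Advisory("CVE-0000-0002", 6.5, has_poc=True),   # demonstrated issue
]
print([a.cve_id for a in actionable(feed)])  # → ['CVE-0000-0002']
```

Whether such a rule is wise is exactly what this thread disputes, but it makes the proposed policy concrete.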


This article does a good job explaining the Linux kernel's position on CVEs: https://lwn.net/Articles/961978/

The relevant part:

> Kroah-Hartman put up a slide showing possible "fixes" for CVE numbers. The first, "ignore them", is more-or-less what is happening today. The next option, "burn them down", could be brought about by requesting a CVE number for every patch applied to the kernel.

They intend to burn down the CVE system, and complaining about it is not a plan to stop it.


"burn them down" is kind of a brilliant political move.

on one hand, it's true to the "all bugs are security bugs (and the converse is obviously true)" position.

on the other hand, it will demonstrably cause hassles for what i term the "checklist security" people. "oh noes, we only have the budget for N CVEs per release and we must have NONE > $arbitrary-number". and so that's a very good thing, because checklist security like that is far worse than nothing at all.

on the other-other hand, for more legitimate, well-engineered downstream users, a CVE for every bug may conversely help real security. "all" you have to do is evaluate every single potential security flaw's actual applicability and impact for your use case. and if you can't afford that, faking it with a check-the-box mentality is no substitute. but maybe you can afford it? if so, better data will probably help, not hurt.

finally, the other concrete improvement is that the people closest to the product will be able to judge CVE scores more accurately and correct outliers more quickly. a local privilege escalation that requires enhanced privileges to execute is not an 8. i'd much rather have 1000 CVEs with appropriate scores than 10 that are massively overrated and stupid. of course, a simple one-dimensional metric is hogwash anyway, but - baby steps.
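for what it's worth, that "not an 8" claim checks out against the published CVSS v3.1 base-score formula itself. a sketch, covering the scope-unchanged case only, with metric weights and the Roundup rule taken straight from the spec:

```python
# CVSS v3.1 base score, Scope: Unchanged case only (weights per the spec).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required (S:U)
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x):
    # Spec's Roundup(): smallest one-decimal number >= x.
    i = int(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Local attack vector, full C:H/I:H/A:H impact, only PR varies:
print(base_score("L", "L", "H", "N", "H", "H", "H"))  # PR:H → 6.7
print(base_score("L", "L", "N", "N", "H", "H", "H"))  # PR:N → 8.4
```

so a local privesc that already requires high privileges scores 6.7, and only reaches 8.4 when no privileges are required at all. the overrating happens when whoever files the CVE just doesn't set PR honestly.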


I just assume that there is always a 0day lurking in my kernel. If you can execute any code on my system, I assume it's game over.


It is often difficult to assess the consequences of a bug, especially in a large and complicated project like an OS kernel. It could take a lot of time, and it is easier to just fix the bug and err on the safe side by calling it a potential "vulnerability". Especially when nobody pays a bounty for a proof of concept.


I wonder how much time has been wasted discussing whether something is or isn't a CVE.

As an end user, if you don't have a script that triggers an exploit for a given CVE, I don't care. I have had to patch too many systems ASAP!! for hypothetical issues because management wants to look like they know something.


Is it true that the Linux kernel has traditionally deprecated the idea of "security bugs"? I thought the kernel crew took the view that a bug is a bug.

So perhaps this policy is a kind of spoiler response to efforts to require all security bugs to have a CVE allocated.


seems like they are creating noise and chaos in the infosec space to centralize their own role and discretion in "managing" it.

Cynically, I'd bet there are more than a few less technical but academic operators on the board who would run this play.


Recent and related:

Linux Is a CVE Numbering Authority (CNA) - https://qht.co/item?id=39406088 - Feb 2024 (10 comments)

The Linux kernel project becomes a CVE numbering authority - https://qht.co/item?id=39361511 - Feb 2024 (24 comments)


Right now, the vast majority of CVEs reported are bullshit filed by wannabe security researchers for resumé padding. Look at all the useless CVSS 9.8's filed against curl. With LLMs, even more bogus reports get filed every single day.

CVEs assigned to every Linux commit are more valid than those bogus CVEs, because each one is associated with an actual change in a security-critical project.

If you want the flood of useless CVEs to stop, you have to clean your own house first.


Bad CVEs elsewhere aren't an excuse.


It's not elsewhere, it's bad CVEs everywhere. Curl is just a particularly good example because they document it so well.


There are many more good and useful CVEs. I'd also kindly ask you to suggest a better system.


Filing a CVE used to be a dialog between the researcher, developers, and third-party domain experts. Accepting every random LLM-generated report and granting it a 9.8 score is not useful in any way.

I have to patch hundreds of CVEs a month, and only a handful are actually valid. The vast majority are "CVSS 9.8: regex complexity explosion in $library", where my project only uses $library during the build. But I've got to patch it, because it's definitely absolutely critical.

While the standard library bug that causes SSL connections to fall back to TLS1.1 instead of TLS1.3 by default is considered WONTFIX and gets REJECTED for a CVE.
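The kind of local re-scoring this implies can be sketched in a few lines. The function, categories, and numbers below are all invented for illustration; "reachability" here is a deliberately simplified stand-in for the analysis a real team would do:

```python
# Hypothetical local triage: downgrade a feed's CVSS using knowledge the
# feed can't have, e.g. whether the dependency even runs in production.

def effective_priority(cvss, used_at_runtime, attacker_reachable):
    """Map a feed score plus local context to a patching priority."""
    if not used_at_runtime:
        return "build-only: track, don't fire-drill"
    if not attacker_reachable:
        return "low: fold into the next regular update"
    return "high: patch now" if cvss >= 7.0 else "medium: patch soon"

# The regex-complexity example above: CVSS 9.8, but build-time only.
print(effective_priority(9.8, used_at_runtime=False, attacker_reachable=False))
# vs. a TLS-downgrade bug with a modest (invented) score, live on every connection.
print(effective_priority(5.3, used_at_runtime=True, attacker_reachable=True))
```

Which is the commenter's complaint in code: the 9.8 ends up below the 5.3 once context is applied, yet compliance processes key off the raw feed score.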


That's obviously really unfortunate, but again, what's better out there?


To me the most disingenuous thing here on Linux’s part is that they won’t issue CVEs for vulnerabilities which aren’t fixed. And thus the latest Linux release is always tautologically CVE-free. Magic. Maybe at work I should raise the idea that our Jira bug tickets should have only one possible status: “fixed”. I’m learning from the best.


> Because of this, the CVE assignment team is overly cautious and assign CVE numbers to any bugfix that they identify

Shouldn't this strategy lead to the opposite? By being overly cautious they should only assign CVEs for real demonstrable security issues.


You can think of it as a "fail-safe" situation.

Being cautious here means "it's better to assign a CVE when it's not a vulnerability, than to NOT assign a CVE when it's actually a vulnerability"


I hate to get all meta (no I don’t sorry not sorry), but there is 100% a thing where every word that means (a) some important thing and (b) some less important thing ends up being a word that, for the vast majority of people, carries something like the emotional impact of definition (a) and the actual meaning of definition (b). For example, “literally” now means “figuratively”, “scan” now means “skim”, “authentic” means “expensive”, and so on. It’s basically Gresham’s Law [0] where less-consequential definitions of words drive more-consequential definitions of the same out of the marketplace.

0. https://en.wikipedia.org/wiki/Gresham%27s_law


I think I speak for the whole 0day and 1day market when I say "Thank you Linux"



