Hacker Times | new | past | comments | ask | show | jobs | submit | login

Hm, is this really "crippling" AMD? Seems more like Intel submitted a performance patch that is only enabled for Intel processors, but could be extended to support AMD too.

There's a moral difference. It is wrong to intentionally degrade the performance of your competitors. It is not wrong to not do something that benefits others.



The point is that the correct way of doing this is to check for the feature, not for the vendor. It's perfectly legitimate for Intel to submit code that helps Intel CPUs, but glibc shouldn't be accepting code that unnecessarily favours one CPU vendor. The correct version of this code would simply check for the feature, so that if AMD supports it, it just works.


While I completely agree with the principle, there may be practical issues for the glibc maintainers. Suppose, for example, that the use of the feature in a given way is an optimal solution on Intel processors but not on AMD? In that case, releasing the code might also be seen by some as enabling a campaign by Intel against AMD. If the glibc project does not have the resources to check, perhaps the best option is to start with a conjunction of feature and vendor checks, with the latter being expanded to include all vendors that have endorsed the change.


It might just be the easiest way to do it. It's also possible that Intel didn't have access to samples of enough AMD stuff to test, or the wrong department had it, or they couldn't get approval to take the time it would require. The GCC guys almost certainly lacked the hardware, time, and inclination to do this instead. It's not malice, I don't think, so much as practicality. I'm sure they'd accept an AMD patch enabling this where available.


Intel and AMD should provide GCC maintainers hardware. It could easily come out of the marketing budget, because benchmarks compiled using gcc would be affected, and they use benchmark numbers in advertising.


> Intel didn't have access to samples of enough AMD stuff, or the wrong department had it, or they couldn't get approval to take the time it would require

I seriously doubt that.


You'd be surprised how bad communication can be in a big company the size of Intel. I'm sure they have plenty of samples of AMD stuff, but I could believe they're being used mostly by some CPU-testing division rather than the open-source one.


I wouldn't be surprised that communications were bad within Intel. I also wouldn't be surprised if there was zero political will to make sure their contribution worked properly on AMD processors. Combine the two and you get whatever plausible deniability you want to use to deflect the argument, I suppose.


The optimization could be a slowdown on other chips. AMD uses narrower vectors than Intel, for one.


> but glibc shouldn't be accepting code that unnecessarily favours one CPU vendor.

Why? AMD just has to do the same. I want to be able to use the CPU I buy to its maximum capability.


The key here is 'unnecessarily'. If it works just as well on an AMD CPU, it should be enabled on an AMD CPU regardless of who submitted it.


But it is the job of AMD engineers, or at least of people willing to run benchmarks on AMD CPUs, to ensure that "it works just as well on an AMD CPU" — not Intel's.


No.

With an open source project like this, every contributor has a responsibility towards all users of that code. That means that if Intel makes a change, they are also responsible for AMD support. Putting a generic optimization behind vendor detection is not acceptable in the community.

However, as others have mentioned, this might not have benefited AMD at the time, despite technically being supported, so it was likely well-meant when implemented. Intel is usually quite a good player when it comes to open source contributions.


They didn't harm AMD users in any way; they just didn't invest in their well-being. You can't expect Intel to benchmark whether a feature would be better or not on AMD; that's not its role. It submitted a patch that was good for its users. It's up to AMD to benchmark it for their users. It would still be far less expensive for AMD to benchmark it and enable it than it was for Intel to develop it from scratch.


> You can't expect intel to benchmark if a feature would be better or not on amd, that's not its role.

Why not? If I was maintainer I wouldn't accept the patch, I'd ask the authors to test on AMD as well. Intel is well funded and glibc is not their project. If they want glibc to include optimizations for their platform they can do the minimal level of effort to see if it can be enabled on AMD as well.


And then, after said minimal amount of effort, it turns out that the patch actually hurts performance on some AMD CPUs, and people bring out pitchforks for Intel intentionally slowing down AMD?

It seems to me that the current approach is the most conservative: limit the impact to what you know best. Leave it to the experts of the other system to do whatever is needed.


I don't understand your scenario. The minimal amount of effort includes a benchmark, so it wouldn't be enabled if it hurt performance.


How many AMD CPU configurations exist? Does a minimal effort require regression testing all of them? On how many benchmarks? How do you know that there aren’t untested potholes in AMD CPUs that you may not be aware of?

In the end, the result of this feature is a performance improvement for some and status quo (no perf regression) for everybody else. It’s a net benefit with no downside in absolute terms.

In addition, it provides a free roadmap for AMD or its users on how to get the same benefit as well.

The potential backlash of enabling this for AMD as well by somebody of Intel (“active sabotage!!!”) is much larger than this tempest in a teacup where AMD is currently missing out on something.


The same benchmarks you used to justify adding it in the first place, on their current model and ideally a slightly older one, would be sufficient.

> The potential backlash of enabling this for AMD as well by somebody of Intel (“active sabotage!!!”) is much larger than this tempest in a teacup where AMD is currently missing out on something.

I disagree.


Historically, Intel has been found to create suboptimal code paths in its own compiler for competitors' chips, in addition to not enabling features supported on competitors' platforms.

What happens if the implementation is slower but still works on AMD? Is Intel responsible for also performance testing and determining if/when to disable an implementation on a given chip? You're putting a lot of burden on Intel to do extensive testing and also not protecting them from criticism if a change is suboptimal for a competitor.

I think it's fine to submit a patch that's known to be good for a subset of CPUs and perhaps it should be tagged for another maintainer (e.g. AMD) to review and contribute to as well.


If the original patch contained a benchmark showing that it was slower on the current AMD processors then this would be a reasonable argument.

It did not, because no such testing was done.

I think there's a pretty low baseline of effort they can put in. I also think that if they enabled new code paths for newer instructions that turned out to be slower on AMD, very few people would claim this was Intel being evil. Most would blame AMD for selling a defective product.


I would argue it's absurd to ask Intel to maintain test platforms for a competitor.

At best you could ask them to flag platform specific code in such a way that others, who are better equipped, can test against other platforms.


That's true. Whoever accepted the patches should have thought of that. But that happens, right? I don't think this is newsworthy at all.

Bad implementations happen, people find better ways, they refactor and improve things. This holy war some people want to have between AMD and Intel is quite boring.


> You can't expect intel to benchmark if a feature would be better or not on amd, that's not its role.

It is, in fact, its role as a major contributor.

And they do, in fact, benchmark and fix AMD as part of its open source efforts, as they should. Case in point: https://github.com/OpenVisualCloud/SVT-VP9/pull/48 (Intel developer contributing AVX2 improvements, specifically mentioned as AMD Epyc improvements).

They can cater only to their own devices when the code only applies to those devices (e.g. the i915 graphics driver). In generic code paths, they must cater to all users. Adding optimizations that use generic features with generic flags, but hiding them behind vendor detection, is borderline malicious. The developers here are well aware that there is a feature flag they should have checked instead.


> With an open source project like this, every contributor has a responsibility towards all users of that code.

What? No. Do IBM engineers have a responsibility towards Qualcomm engineers when they commit IBM patches under IBM-named flags to the Linux kernel? What happens if there's a new CPU company in two years whose chips would also happen to work fine with these flags?


> Do IBM engineers have responsibility towards Qualcomm engineers

Yes, in every way. However, since Qualcomm doesn't ship any PowerPC architectures, there isn't much harm that could be done. And for reference, Intel also tests on AMD, as it should, and even submits performance improvements for AMD.

But this is quite bad: Instead of: `if (supports(generic_feature)) { do_with_generic_feature(); } else { slow_approach(); }`, they did `if (intel_haswell) { do_with_generic_feature(); } else { slow_approach(); }`.

The only time where that is acceptable from as big a contributor as Intel, is if they tested and concluded that other CPUs were actually slower using this feature.

> What happens if there's a new CPU company in two years that would also happen to work fine with these flags ?

That is exactly why the feature flags exist in this generic code! You can query what instruction sets are available on the CPU. Vendor-specific code should only be added to deal with product-specific defects.


The glibc core maintainers have such responsibility, yes, but it's too much burden for occasional contributors focusing on a single issue or a particular architecture. Setting the bar too high would make glibc lose valuable patches from both Intel and AMD.

A reasonable compromise is requiring architecture-specific contributions to at least do no harm to other vendors, and to not increase maintenance costs for core developers by duplicating code.

In this light, AMD engineers wanting to enable the haswell optimizations for their processors would be asked to share the existing code rather than copy-paste it. Intel engineers would participate in the public review to ensure AMD patches don't cause regressions for Haswell. If they have contributed testcases, they will demand that AMD patches pass them on all supported architectures before being merged.

This is pretty standard in all open source projects with multiple stakeholders.


> occasional contributors

Intel has made 171 contributions in the form of commits to glibc as of master today. I doubt they can be considered an "occasional contributor".

And even then, small contributions only get to bypass the responsibility if we're dealing with small bugfixes.


If Intel is intentionally checking based on vendor rather than feature, that is scummy. glibc should not accept patches that do this; it should instead require such checks to be based on feature detection.


Yes, and that's what is happening with this glibc improvement. glibc is part of the C runtime; it is part of what makes C run on a given CPU.

What are you complaining about, exactly? The system is working as it ought to work.


Since AMD has equally no obligation to Intel users, they can submit a patch that replaces Intel's

    if (intel)
        fast();
    else
        slow();
with

    if (amd)
        fast();
    else
        slow();
Or maybe there's a better way.


> It is not wrong to not do something that benefits others.

The patch, however, sneakily removes the chance that features existing and working on competing processors are properly detected and used.

In free software it’s clearly evil.


> The patch however sneakily removes the chance ...

If they haven't tested on other processors, should they leave code in because it has a "chance" to work on something else? I think both sides of this question could be legitimately argued so I wouldn't jump to calling it "evil".


What do you mean `"chance" to work`? If the CPU claims to support an instruction, it's perfectly fine to treat it as supporting the instruction.


So patch it. That's the point of free software: the freedom to modify it to your needs.


This is why a bug is reported; when it is patched, the bug will be closed. That is the point of a bug tracker in a free software project: documenting the current issues that need to be solved or improved.


Well the patch certainly met Intel's needs (a nice little performance bump over its ascendant rival) ... and since someone with a keen eye spotted it, now the rest of us can submit a patch to correct that. But I'm not sure that a back-and-forth war of self-serving patches is very productive or serves anyone very well.


In this scenario, is not each self-serving patch strictly an improvement? It's at least arguable that a back-and-forth competition to provide improvements serves everyone well.


> But I'm not sure that a back-and-forth war of self-serving patches is very productive or serves anyone very well.

In large FOSS projects you have enough eyes on those patches to ensure they actually converge towards some (unbiased) optimum.


That's a myth. When was the last time you reviewed the technical aspects of heavily platform-specific code in glibc?


about 3 months ago, in a code audit.



