> what happens is that, as in software, certain ideas get ossified. That’s why, for example, every OS has a POSIX layer even though technically the process/namespace/security model could be radically reimagined possibly to create more easily engineered, correct software.
Total amateur here, but it strikes me that one important difference is that performance matters in software in a way that it doesn’t in mathematics—that is, all proofs are equally valid modulo elegance. That means that abstractions in software are leaky in a way that abstractions in mathematics aren’t.
In other words, in software, the same systems get reused in large part because they’ve been heavily refined, in terms of performance, unexpected corner-case behavior and performance pitfalls, documentation of the above, and general familiarity to and acceptance by the community. In math, if you lay new foundations, build some new abstraction, and prove that it’s at least as powerful as the old one, I’d think that you’d be “done” with replacing it. (Maybe downstream proofs would need some new import statements?)
Is this not the case? Where are people getting stuck that they shouldn’t be?
I know what you're saying but elegance is not simply an aesthetic concern.
The value of a proof is not only its conclusion but also the insight that it provides through its method.
The goal of mathematics is not to prove as many theorems as possible but rather to gain an ever deeper understanding of why certain statements are true. The way that something is proved can be more or less useful to advancing that goal.
As an example the elementary proof(s) of the prime number theorem are just about as famous as the original proof. Sometimes the second bite of the cherry is even juicier than the first.
Exactly. The reason mathematicians and physicists care about elegance is because they care about understanding things. Elegance, like you said, isn't about aesthetics, even though people seem to think they're synonymous. But the elegance is that you've reduced things to simple components. That not only makes it easier for us humans to understand but it means you're closer to the minimal structure. Meaning you know what matters and more importantly, what doesn't.
Tbh, elegance is something programmers should strive for too. Elegant code is easier to build upon, easier to read/understand, easier to modify, easier to adapt, for all the same reasons mathematicians want elegance. Though it's true for many domains. People love to throw around the term "first principles," but that's not (usually) something you start at; it's something you derive. And it's usually not very easy to figure out.
I don't think proof irrelevance is accepted in constructivist settings. Those aren't, however, that relevant to the recent wave of AI math, which uses Lean, whose type system accommodates classical mathematics.
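For what it's worth, in Lean 4 proof irrelevance holds definitionally: any two proofs of the same `Prop` are equal by `rfl`. A minimal illustration (my own example, not from the thread):

```lean
-- Lean 4: Prop is proof-irrelevant, so any two proofs of the
-- same proposition are definitionally equal.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```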
Price increases tend to be regressive—the poor person who needs a little fuel to get to their job is hurt more than the large business that uses a lot more fuel but has much, much more money overall.
There are things you can do to try to even things out. Ethereum has been considering “quadratic voting” to solve a similar problem (in this case, that would look like tracking consumption and increasing the unit price of fuel as you consume more, so that total cost goes up quadratically with consumption). That seems hard to enforce, though, and doesn’t help with foreign opportunists.
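A rough sketch of that quadratic-pricing idea: charge more for each successive unit, so total cost grows with the square of consumption. The base price and quantities here are made-up numbers for illustration.

```python
# Hypothetical quadratic pricing: the k-th unit costs BASE_PRICE * k,
# so total cost is BASE_PRICE * n(n+1)/2, i.e. O(n^2) in consumption.
BASE_PRICE = 2.0  # assumed price of the first unit

def total_cost(units: int) -> float:
    """Total cost when the marginal price rises linearly with consumption."""
    return BASE_PRICE * units * (units + 1) / 2

print(total_cost(1))   # first unit costs just the base price
print(total_cost(10))  # heavy consumers pay a much higher average unit price
```

The point is that the average unit price rises with consumption, so a small consumer is hit far less than proportionally compared with a large one.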
I'm totally ignorant as to Slovenia, but as a general comment on taxation regressive price increases/externality taxes/sin taxes are easily made up for by simply giving everyone a fixed sum of money (that can either be gathered specifically through the regressive tax or just through the normal non-regressive tax pool).
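The arithmetic behind that rebate idea is simple: collect a flat per-unit tax, then return the proceeds as an equal fixed sum to everyone, which leaves below-average consumers net ahead. A toy sketch with invented numbers:

```python
# Hypothetical flat-rebate scheme: a per-litre fuel tax is collected and
# redistributed as an equal fixed sum. All figures below are made up.
TAX_PER_LITRE = 0.50  # assumed tax rate
consumption = {"low_income": 40, "high_income": 200}  # litres/month, invented

total_tax = sum(litres * TAX_PER_LITRE for litres in consumption.values())
rebate = total_tax / len(consumption)  # same fixed sum for everyone

# Net effect per person: positive for below-average consumers,
# negative for above-average ones.
net = {person: rebate - litres * TAX_PER_LITRE
       for person, litres in consumption.items()}
print(net)
```

So the heavy consumer still pays more in absolute terms, while the light consumer comes out ahead, which undoes the regressive effect of the flat tax.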
Ethereum has the weird issue where "votes" and "money" are different things and they only want to redistribute votes and not money, but that's not a problem here...
> Collaboration isn't a process or a management technique -- it is a communication style. If you want collaboration, you can't take random people and use process to "make them collaborate" -- you need to build your team out of people who are collaborators.
Yes! I would add that IMO the communication style can be learned and there are great rewards for doing so.
I believe the rough statistic that 20% of people on a typical project are contributors. I don’t believe that it’s because the other 80% are losers. IME it’s because no serious effort has been made to include them, make sure they understand wtf is going on around them, and help them solve whatever is holding them back.
If you do this, a) it does work, and b) the need for small teams becomes apparent because the now-onboarded person can’t find anything that isn’t already being worked on, so they (with encouragement) start a new thing. And there are limits to people’s ability to understand what’s happening, especially if they’re inexperienced, and some people really don’t have the skills to contribute, but by and large, building bridges for people is still highly worth doing.
I think “iterating more quickly” is good for the company doing the building. But if you’re the customer, having a new piece of shit foisted on you twice a day so that some garbage PM can “build user empathy” gets old really fast.
Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
> Before AI, I worked at a B2B open source startup, and our users were perpetually annoyed by how often we asked them to upgrade and were never on the latest version.
And frankly, they had a point.
Especially in the B2B context, stability is massively underrated on the product side.
There is very little I hate more than starting my work week on a Monday morning and finding out someone changed the tools I use for daily business again.
Even if it's objectively minor, like Apple's last pivot to the Windows Vista design... it just annoys me.
But I'm not the person paying the bills for the tools I use at work, and the person who is almost never actually uses the tools themselves; hence shiny redesigns and pointless features galore.
I mean, not judging other parents doesn’t come from thinking that all other parents are doing a great job, it comes from knowing that you’re doing a terrible job in your own, special ways.
Parenting children is impossible, so all parenting lies on a spectrum from terrible to catastrophic, and it’s hard to know how you did until they grow up (if ever), because there’s a lot of sensitivity and subtle emotional stuff, especially at very young ages, which are the most important and the ones you remember the least. I’m certain there are screen-free parents who are worse for their kids than a good chunk of tablet-hander-outers.
Yeah, I like this framing a lot. There comes a point, after working on a system for a while, when there are no details: every aspect of how the system works is understood to be in some way significant. If one of those details is changed, you understand what the implications of that change will be for the rest of the system, its users, etc. I worry that in a post-AI software world, that’ll never happen. The system will be so full of code you’ve barely looked at, understanding it all will be hopeless. If a change is proving impossible to make without introducing bugs, it will be more sensible to AI-build a new system than understand the problem.
I sometimes wonder if modularity will become even more important (as it has in physical construction, e.g. with the move from artisanal, temperamental plaster to cheap, efficient drywall), so that systems that AI is not able to reliably modify can easily be replaced.
I really like to understand the practice of software engineering by analogy to research mathematics (like, no one ever asks mathematicians to estimate how long it will take to prove something…).
Something I think software engineers can take from math right now: years of everyone’s math education is spent doing things that computers have always been able to do trivially—arithmetic, solving simple equations, writing proofs that would just be `simp` in Lean—and no one wrings their hands over it. It’s an accepted part of the learning process.
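To make that concrete, here are the kinds of routine goals a student sweats over that Lean closes in one step (these particular statements are my own examples, not from the thread):

```lean
-- Routine facts a student proves by hand, each discharged by simp.
example (n : Nat) : n + 0 = n := by simp
example (l : List Nat) : (l ++ []).length = l.length := by simp
```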
In the recent HN thread announcing the new Gemini coding agent (https://qht.co/item?id=47074735), a lot of people complained about Gemini’s tendency to do unwanted refactors, not perform requested actions, etc.
It made me cautiously optimistic that all of Anthropic’s work on alignment, which they did for AI safety, is actually the cause of Claude Code’s comparatively superior utility (and their present success). I wonder if future progress (maybe actual AGI?) lies in the direction of better and better alignment, so I think this is super cool and I’m suddenly really interested in experiments like this.
I wonder the opposite, if actual AGI would need to be less aligned. Alignment is basically the process of pruning interesting behavior out of the model to make a product.
A fairer comparison would be against other models, which are typically better at instruction following. You say "don't change anything not explicitly mentioned" or "don't add any new code comments" and they tend to follow that.
I think IP kind of breaks a lot of engineers' brains (despite how much of it they create) because a lot of IP law is about intent, and their theory of mind is so bad that the idea of a body of law based on deduced intent, and the ways a court might deduce their intent if they used someone else's IP, are totally alien to them.
Or it’s that there is a class of people (“creatives”) who have allowed themselves to become convinced that the idea of private ownership of ideas isn’t completely dystopian and anti-human, because their personal income is reliant on it.