That's certainly one platitude that contains some of the same keywords as the article.
Except that programmers work in a synthetic world. We shouldn't have to "trade off" (nor should we find this acceptable) unless we actually encounter some limitation from the real world (e.g. finiteness of computer memory or CPU cycles, or developer time). I would argue that most of the time, we are tilting at windmills instead of solving things the way we would like to solve them.
Of course programming languages involve trade-offs. Concrete syntax trade-offs abound, but more important are semantic trade-offs. Many are of the form feature vs. performance. Dispatch? Memory safety? Laziness? Others are more complicated, like those involving type systems, which are formally complex while affecting the informal "feel" of the language.
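To make the feature-vs-performance point concrete, take laziness: a minimal sketch (my own example, not from the thread) of how a lazy sequence trades memory for a single-use restriction, using plain Python generators.

```python
# A minimal sketch of the laziness trade-off: a generator produces values
# on demand in O(1) memory, but is consumed by a single traversal; a list
# costs O(n) memory up front, but can be traversed as often as you like.
import sys

lazy = (i * i for i in range(1_000_000))   # nothing computed yet
eager = [i * i for i in range(1_000_000)]  # a million ints, materialised now

print(sys.getsizeof(lazy) < sys.getsizeof(eager))  # True: the generator object is tiny

total = sum(lazy)            # forces the whole computation
print(sum(lazy))             # 0 -- the generator is now exhausted
print(sum(eager) == total)   # True -- the list survives repeated traversal
```

Neither choice is wrong; which one you want depends on whether you will revisit the data, which is exactly the kind of decision a language design can make easy or hard.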
I think it's specious to suggest that the problems in programming language design and implementation are illusory, and that we could somehow code "the way we would like". The language of computation is extremely subtle, and the process of building abstractions over our computers' basic operations is still more subtle. We are not done exploring possibilities---nor, I suspect, will we ever be.
> I think it's specious to suggest that the problems in programming language design and implementation are illusory
That's definitely not what I was trying to suggest. I was suggesting that we do not need to accept "trade-offs" as an inherent property of programming, and that we can strive to do better. Sure, programming languages are designed within a context, but the inherent limitations on us as programming language designers are fewer than those placed on real-world engineers. Gravity, mass, and the cost and tensile strength of materials are fixed constants, yet skyscrapers and wide-body jets are a practical reality. What have we got to contend with? Incompleteness? Bah!
So my reply was not intended to say "programming language design is not hard", rather, "we should not be so quick to accept unnecessary trade-offs". The original post was complaining about problems which have been (largely) solved in other languages. The reply about seeing trade-offs implied that working around limitations is acceptable and part of an engineer's lot. I disagreed, suggesting that most of these "limitations" are imposed by limitations of thought, not limitations in what we can actually achieve.
The best languages for writing quick-and-dirty 15-minute scripts are never going to be the best languages for million-line, mission-critical applications, the best languages for embedded programming are not going to be the best languages for web programming, and the best languages for beginners are not going to be the best languages for more advanced programmers.
So perhaps programmers sometimes give up too quickly and could do better on all those fronts, but it's not some limitation of the real world that forces the tradeoff: it's the fact that different decisions regarding things like syntax or the type system do have a real impact on the suitability of the language for different purposes, and you can't be all things to all people there. Sometimes you want a very loose language that doesn't get in your way; sometimes you want a very strict language that ensures correctness; sometimes you want a language that lets you manipulate things close to the metal; sometimes you want a very simple language that's easy to learn because it doesn't have too many concepts; sometimes you want a very advanced language that gives you lots of different ways to solve problems.
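The loose-vs-strict pull shows up even inside a single language. A hypothetical sketch (the function names are mine, not from the thread) of the same trivial operation written both ways:

```python
# Two versions of the same operation, illustrating the loose/strict pull.

def loose_concat(a, b):
    # Loose: works for any pair of values supporting `+` -- ints, strings,
    # lists, tuples -- but a bad call fails wherever `+` happens to blow up.
    return a + b

def strict_concat(a: str, b: str) -> str:
    # Strict: rejects anything but strings at the call boundary, trading
    # flexibility for an immediate, clearly located error.
    if not (isinstance(a, str) and isinstance(b, str)):
        raise TypeError("strict_concat expects two strings")
    return a + b

print(loose_concat([1], [2]))     # [1, 2]
print(strict_concat("ab", "cd"))  # abcd
try:
    strict_concat([1], [2])
except TypeError as err:
    print(err)                    # strict_concat expects two strings
```

The loose version is what you want in a 15-minute script; the strict one is what you want at the boundary of a million-line system. A language design tilts the defaults toward one or the other, and that tilt is the trade-off.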
No language can or should be all things to all people.
Of course . . . you can also use a flat rock to hammer in nails when framing a house, but a framing hammer might be a better idea. Doesn't mean it can't be done, though, especially if you're really good with rocks and already have them lying around, whereas you'd have to go to the hardware store just to buy the framing hammer.
My point was not "language X can't do project type Y," but rather that the things that put a language in the sweet spot for certain types of projects kind of inevitably pull it out of the sweet spot for other types of projects. I.e. it really is always about trade-offs. Sometimes the mere fact that you know how to use a given language is enough of a win to outweigh any potential benefits of switching to another language.
While I'm not a big fan of Tcl/Tk, I'm quite familiar with it and find this statement confused or misleading in at least two ways:
First, GWL once told me the bulk of the code is actually rather mundane GUI form definitions (the sort of crap that is trivial to generate with high level interface tools). There's only a small amount of actual "logic" at that level that ever needs to be maintained (the real control isn't done in Tcl).
Second, even if there are 50,000 lines of Tcl (though I think the real number is closer to 300,000), it doesn't mean it's one big script. It's probably fairer to compare it to a Unix system with hundreds or thousands of separate, smaller, write-once scripts.
Ironically, I think the real fault here lies with GWL and other Tcl enthusiasts for quoting a true but misleading "number of lines" in the first place. The number of separate scripts and the average script length would be more meaningful and representative of the system's real complexity.
The general-purpose language has been very successful because there's sufficient commonality between the operations that many programs perform (what is good for one is good for another), but the DSL has been successful too, perhaps to a lesser extent, because of the benefits of the specific optimisations you mention.
"If I wanted to put an 'If' statement in a 'While' statement, I don't have to worry about whether the 'If' statement will oscillate at a certain frequency and rub against the 'While' statement and eventually they will fracture. I don't have to worry whether it will oscillate at a certain higher frequency and induce a signal in the value of some other variable. I don't have to worry about how much current that 'If' statement will draw and whether it can dissipate the heat there inside that 'While' statement, or whether there will be a voltage drop across the 'While' statement that will make the 'If' statement not function. I don't have to worry that if I run this program in a salt water environment the salt water may get in between the 'If' statement and the 'While' statement and cause corrosion.

I don't have to worry when I refer to the value of a variable whether I am exceeding the fan-out limit by referring to it 20 times. I don't have to worry, when I refer to the variable, how much capacitance it has and whether there has been sufficient time to charge up the value. I don't have to worry when I write the program about how I am going to physically assemble each copy, and whether I can manage to get access to put that 'If' statement inside the 'While' statement. I don't have to worry about how I am going to gain access, in case that 'If' statement breaks, to remove it and replace it with a new one.

So many problems that we don't have to worry about in software. That makes it fundamentally easier. It is fundamentally easier to write a program than to design a physical object that's going to work. This may seem strange because you have probably heard people talking about how hard software is to design, and how this is a big problem, and how we are going to solve it. They are not really talking about the same question as I am. I am comparing physical and software systems of the same complexity, the same number of parts.
I am saying the software system is much easier to design than the physical system. But the intelligence of people in these various fields is the same, so what do we do when we are confronted with an easy field? We just push it further! We push our abilities to the limit. If systems of the same size are easy, let's make systems which are ten times as big, then it will be hard! That's what we do! We make software systems which are far bigger in terms of number of parts than physical systems. A physical system whose design has a million different pieces in it is a mega project. A computer program whose design has a million pieces in it is maybe 300,000 lines; a few people will write that in a couple of years."
I wonder when it is that you don't encounter limitations from the real world. I certainly couldn't type a line of code without my computer interacting with the real world.
If by "real-world" you mean "sending signals to a display within certain timing constraints", sure.
My point is that software is inherently virtual, therefore there are far fewer constraints placed on its shape, size and smell than on any similarly elaborate construction in the real world. For example, if I want two cogs to interact with each other in the real world, they have to be physically adjacent. If I want two things to interact in software, I just say so and it is done.
And that is what gives programming language designers the freedom to do a much better job. Most of the languages in common use today are the way they are because they are essentially exposing an underlying architecture with some syntax over the top, or doing the same at one degree removed. Just because we've always done things in a particular way doesn't mean that we need to keep doing them that way.
Which goes back to my original point that we have essentially artificially constrained ourselves by insisting that anything we do be reasonably similar to everything we've done before.