If the compiler can prove there isn't action-at-a-distance between those lines (this might be non-trivial), then can't the destructor be called before running bar? Does the C++ spec necessarily say that destructors are called at the end of the block, rather than, say, as soon as the variable is no longer used?
> If the compiler can prove there isn't action-at-a-distance between those lines (this might be non-trivial), then can't the destructor be called before running bar?
Yes; there is an "as-if" rule in the C++ standard that says that the implementation must emulate only the "observable behavior" of the abstract machine defined by the standard. The observable behavior is: access through volatile lvalues, data written to files, and I/O to interactive devices.
> Does the C++ spec necessarily say that destructors are called at the end of the block, rather than, say, as soon as the variable is no longer used?
Yes, as long as the destructor has any side effect:
> If a variable with automatic storage duration has initialization or a destructor with side effects, an implementation shall not destroy it before the end of its block nor eliminate it as an optimization, even if it appears to be unused, except that a class object or its copy/move may be eliminated as specified in 11.10 [1].
Otherwise things like std::lock_guard [2] would not work; that's an example of a type that is never explicitly used after its definition, existing purely for the side effects of its constructor and destructor.
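If you're more at home in Python than C++, the closest analogue is a context manager: an object held purely for its deterministic entry/exit side effects. A rough sketch of the same idea (not of C++ semantics):

  import threading

  lock = threading.Lock()

  def update_shared_state():
      # The with-block guarantees lock.release() runs when the block
      # exits, much like lock_guard's destructor at the end of its scope.
      with lock:
          ...  # critical section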
For the experts in this thread: is there any benefit to using these so-called array languages compared to using something like numpy (or even pandas/polars)?
The short answer is yes. There have been many presentations on this topic that try to explain it in various ways.
The problem is that people unfamiliar with APL usually don't see the larger picture: you need to learn the language before the reasoning makes sense. But once you understand it, you don't really need to hear the arguments anymore.
One argument that may be easier to digest is that the very optimised syntax allows you to easily work with the data in an interactive fashion. This is similar to how a calculator that forced you to write 1.add(2) would be rather painful to use, even though it is functionally the same as 1+2.
In programs that you save to a file as part of a larger project, this benefit is of course less relevant.
For me, the feeling of being well-designed and expertly crafted is what sets the array languages apart. Learning one concept often extends, at least intuitively, to many other parts of the language.
For example, in Q the comma operator concatenates arrays:
q) 1 2 , 3 4 5
1 2 3 4 5
...but it also merges dictionaries (the duplicated key `c gets the new value):
q) (`a`b`c!1 2 3),`c`d!4 5
a| 1
b| 2
c| 4
d| 5
...and also joins tables by row. Sure, this is "just" operator overloading, but it's so deeply ingrained in the language that it doesn't feel jarring or bolted-on like in other languages.
Building a program is less about crafting bespoke abstractions and more about using the existing building blocks, which leads to a semantic uniformity that's rare in other languages. Or at least, it's easier to get your job done using only built-in features, without having to resort to custom abstractions.
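To make the contrast concrete for the numpy/pandas crowd in the original question: roughly the same three operations in Python each have their own unrelated spelling (a sketch; the toy data is made up):

  import numpy as np
  import pandas as pd

  # arrays: a dedicated function
  np.concatenate([np.array([1, 2]), np.array([3, 4, 5])])  # [1 2 3 4 5]

  # dicts: a different operator (Python 3.9+); the right-hand value wins on duplicate keys
  {"a": 1, "b": 2, "c": 3} | {"c": 4, "d": 5}  # {'a': 1, 'b': 2, 'c': 4, 'd': 5}

  # tables: yet another function
  pd.concat([pd.DataFrame({"x": [1, 2]}), pd.DataFrame({"x": [3]})])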
Mostly yes. At $work we're trying to move away from pandas entirely in favour of polars. Polars is mostly faster, with an API that's actually sane and makes sense. No reason to use pandas nowadays.
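For anyone who hasn't tried it, a minimal sketch of the polars expression API (toy data; recent versions spell it group_by, older ones groupby):

  import polars as pl

  df = pl.DataFrame({"group": ["a", "a", "b"], "value": [1, 2, 3]})

  # Lazy queries let polars optimise the whole plan before running it
  out = (
      df.lazy()
      .filter(pl.col("value") > 1)
      .group_by("group")
      .agg(pl.col("value").sum().alias("total"))
      .collect()
  )
  print(out)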
Don't criticise people for making certain decisions years ago when those decisions don't match what you'd choose now. Often you'll find they were very reasonable given the constraints at the time.
Also, the spec will have evolved over time, with changes made under the constraints of the existing system, which tends to produce things that aren't as nice as a design that supported the features from the get-go. This is seen very often in software engineering, and is probably part of the reason why long-lived codebases tend to be dumpster fires in general.
Calling them 'very biased and not very smart' is not very constructive.
That's not to say that the wheel format isn't a dumpster fire (I'll have to take your word on that), or hasn't morphed into one with time & revisions.
Are we talking about not criticizing Wheel format?
Because, if so, I'm not buying it. Wheel is an iteration after Egg, created in a world already full of package managers and packages of all sorts and flavors. The Wheel authors failed to learn from what had been available for, I don't know, some thirty-odd years? (I'm thinking CPAN.)
It has problems that just show how immature the people who designed the format were when it comes to using existing formats. For example, the Wheel authors were completely clueless about multiple gotchas of the Zip format (even though they'd been using Egg, which is also based on Zip, for... what, a decade? I mean, come on, you had to be blind and deaf not to know about these problems if you had anything to do with packaging).
But the most important problem is the name format. And it's not about knowing the gotchas of other formats; it's a total lack of planning, of any ability to predict the next step. For instance, some parts of the Wheel name are defined roughly as "whatever some function in the sys module returns on that platform". That leaves this part of the name essentially unpredictable and undefined. The Wheel authors cannot make a universal package, because to do so they would need knowledge of all existing and all future platforms... which, of course, nobody has.
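You can see those platform-dependent values for yourself. A quick sketch, assuming the third-party packaging library (the one pip vendors) is installed:

  import sysconfig
  from packaging.tags import sys_tags

  # The platform string that gets baked into wheel filenames;
  # its value varies per interpreter build and OS.
  print(sysconfig.get_platform())  # e.g. linux-x86_64

  # The wheel tags this interpreter accepts, in order of preference
  for tag in list(sys_tags())[:3]:
      print(tag)  # e.g. cp312-cp312-manylinux_2_17_x86_64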
And they did it because... it was easy. Not because it was the right thing to do or the smart thing to do. The consequence is that implementing a PyPI competitor is virtually impossible: it's a crap-sandwich of multiple layers of mistakes that prop each other up (various parts of the name format were modified several times over the course of history, and weren't immediately supported by pip). Similarly, implementing a viable alternative to pip is almost impossible, because of the same historical crap-pie of mistakes on which Python package publishers built their whole infrastructure.
This has led to a situation where the whole of Python packaging is locked into PyPI, setuptools and pip. Those intimately familiar with the subject know they are broken beyond repair and have no hope of getting better, but the mess is so big that undoing it just seems impossible. And, of course, PyPA, blissfully unaware of all the nonsense going on in its tools, keeps adding worthless new features to polish this turd.
I would expect most people to read code in their IDEs, where small amounts of type inference like this are fine because the IDE tells you what the type is.
I agree that if you spend a lot of time reading code in something like GitHub, not having explicit types is annoying, but seriously, who does that?
If the IDE only displays the type on hover, that’s a significant usability regression.
It also makes it harder to grep for usages of any given type. Of course IDEs could help with that too, but I don’t know any that provide that functionality.
IDEs don't always display every type in a useful way. Sometimes they just display an unhelpful alias which ends up being just as useless. Sometimes they display an incredibly verbose version with all the default template parameters written out, making it a nightmare. Sometimes they give useless results, like when you have a dependent type whose concrete type you know (imagine typename T::value_type vs. size_t). Sometimes they haven't even finished analyzing the code yet. I could go on. You should not be crippled without your IDE.
Can you compute elementary functions by hand? Why not? Why are you crippled without semiconductors? This attitude leads to never being able to use better tools. We can't leverage an IDE because then we're "crippled" when we don't have it, so we continue writing code as if it was the '70s and the best we have is ed.
Good point! I remember that being a lint in Visual Studio. A very valid one.
In contrast, in Python, initial use exhausts generators; subsequent iterations turn up empty. A gotcha, but also a way to highlight misuse, as it should show up in testing.
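For example:

  def numbers():
      yield from (1, 2, 3)

  g = numbers()
  print(list(g))  # [1, 2, 3]
  print(list(g))  # [] -- exhausted; a second pass silently yields nothing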
Because that IDE is compiling the code behind the scenes... (Usually somewhere on a spectrum between literally just using the real compiler and failing spectacularly to match it.)