There are real benefits to be gained from new compilers, though. Look at "Circle" by Sean Baxter as an example. Besides supporting a powerful form of compile-time computation and allowing things such as reflection and writing shaders directly in C++, benchmarks have been floating around showing that it compiles quite a bit faster than GCC and Clang while still being able to compile big projects such as Boost. Although LLVM is the backend, the front end was written from scratch by one guy.
Would anyone be able to make some brief comparisons between Rust's asynchronous model and the model followed by Asio in C++ (on which C++23 executors/networking is based), and say whether there are any parallels with the sender/receiver concept (https://www.youtube.com/watch?v=h-ExnuD6jms)?
I've seen a few comments talking about Rust's choice of a poll model as opposed to a completion model. Am I correct in assuming these are the same things as a reactor (kqueue/epoll etc.) vs. a proactor (IOCP), and that a proactor can be implemented in terms of a reactor?
Perhaps because that standard design didn't work out?
Even Switzerland is shutting down nuclear plants, and they managed to build the longest railroad tunnel in the world without budget overruns and half a year ahead of schedule.
That's not so clear cut. The operator of Mühleberg near Berne said that in 2019 the plant didn't make a profit; however, the operator company BKW never published that plant's earnings.
Somewhat related, but C++20 ranges do not replace iterators; instead they build a higher level of abstraction on top of iterators. Ranges are implemented in terms of iterator pairs.
Yeah. That actually sucks; often iterators don't fit the iteration model of some sequences, but ranges would fit right in if they didn't require separability into two iterators.
Since C++20, the end iterator doesn't have to be the same type. Instead it can be something like a tag with no runtime data of its own, and comparing with it can be implemented as asking the other iterator whether it is 'out of data'.
Yeah that's definitely an improvement in some respects, but (a) generic code still has to deal with 2 iterators, (b) generic code now has to allow them to be different types (so old code will have to be rewritten), (c) it doesn't change the fact that iterators are still treated like cheap objects (generalized pointers) and hence the begin iterator is still potentially expensive to store and copy around regardless of what you do with the end iterator, and (d) it's still a fundamental model mismatch relative to the actual problem, just not quite as unpalatable as before. Ranges fundamentally change the calculus around iteration.
P.S. (e) Is this sentinel-based approach even composable? If your iterator needs to use other iterators underneath, what do you do? You have to store a 'full' iterator anyway... or now you waste even more space and time storing a discriminated union. The fact that a range represents the whole sequence in one object instead of two makes composition intuitive and trivial.
Regarding the comment on discriminated unions, I've actually found the opposite with the new ability to have separate types for sentinels. Before, since both iterators had to be the same type, the end iterator needed some special casing to know it was the end. Allowing it to be a separate type meant all the logic for traversing the underlying sequence could be in the iterator type, and the sentinel could be an empty struct like a tag type used for comparisons only.
The iterator and sentinel do not need to be the same type; they just need to be equality comparable. If you wanted to represent an infinite range, you could implement it in terms of a regular iterator and an empty sentinel type that the regular iterator never compares equal to.
It isn't a sentinel, is it? He said no runtime data, so it sounds like some kind of empty tag for the type system. I'm not sure if it's related to what he's describing, but C++20 also added [[no_unique_address]] for something to do with truly empty members.
I thought he was saying the `end` in range begin/end was now allowed to be some empty signifier to tell things to look for the null pointer (null is a sentinel, but `end` itself is now empty). That may not be right, I don't know enough about the new concepts stuff and range seems to be part of that.
Yes but I'm saying this is kind of moot when you're writing a struct that needs to store the iterators, because it can't get away with just storing the 'begin' -- you can't guarantee the second iterator you're storing will actually be the end. Meaning this iteration model isn't efficiently composable.
It is not so clear to me what you mean. Maybe you can point to a better version (maybe what some other language does), and I can explain how I think the same can be done with C++20 ranges. I believe ranges are strictly more powerful than what other languages provide, and often they can be slimmer/more efficient for the common case.
Bear in mind the buffer only represents a chunk of intermediate results fetched from the OS; when we reach its end, we have to request more data from the OS.
If you had to make an iterator for this, what would you do? You'd basically need to either (a) create iterators that at least duplicate the entire state of DirectoryChildren on the stack, or (b) make iterators allocate on the heap. Both of these suck, because (1) iterators are passed around and copied all over the place on the assumption that these are fundamentally cheap operations, which is an assumption that easily breaks (like here), and (2) the iterators fundamentally do not have any meaning or utility as separate entities from the range itself.
This functionality already exists in the C++ standard library as std::filesystem::(recursive_)directory_iterator, with the default constructor creating a sentinel value. This is solution (1), and it's due to the restriction that prior to C++20 iterators were required to be the same type. C++20 relaxes this restriction and allows the sentinel to be any type as long as it is equality comparable with the other iterator. Since on *nix systems readdir is used under the hood, the sentinel can simply be an empty type that compares equal only when the actual iterator's last cached dirent pointer is NULL.
Why do you keep repeating the same comment? Are you reading my replies? I already addressed exactly what you're talking about: https://qht.co/item?id=25492756
Python and C++ are joined at the hip now primarily due to scientific computing and AI/ML adjacent fields. I feel the growth of Python has helped C++ grow too.
I feel like the fastest growing and most dominant area for C++ is scientific computing and AI/ML. Although many people write these programs in more user-friendly languages like Python and R, most of these programs call through to C++ frameworks (which many people use directly too), so in a way C++ has latched itself to the growth of such languages. Furthermore many major accelerated computing platforms (frameworks?) such as CUDA, Intel's oneAPI and Khronos' SYCL are focused on and committed to the C++ language and ecosystem and I don't really see much competition in this space for the time being.
It's funny that you mentioned games as an example of parallel computation, because I'd argue they're some of the hardest programs to parallelize effectively, since they don't generally involve much bulk processing of read-only data.
Parallelism in C++ is most often used for scientific applications and other forms of mass number crunching. It's really easy to just throw a "#pragma omp parallel for" on a loop and call it a day, though that applies equally to C and Fortran and is somewhat limited. Parallelism libraries like Intel TBB, which I'm most familiar with, are easy to use and performant. I think there's a large problem in the reluctance of educators to use libraries to teach parallelism; people always dive straight into locks, threads and atomics, which are really not the way to approach parallel computing unless you're looking to implement parallel primitives yourself (i.e. a DIY tasking system or lock-free queue).
Focusing on TBB, it facilitates efficient parallelism by providing high-level canned algorithms such as parallel_invoke, parallel_reduce, parallel_for and parallel_do, which anybody who claims to know C++ should be able to use easily. It also provides a task graph, which is great for more complex processing pipelines (things like join/split, fan-in/out and queueing). If you need more low-level control you operate at the task level, and TBB provides customization points for that. There are other libraries out there which provide similar functionality, and even the STL in C++17 provides basic parallel algorithms such as transform (the equivalent of map in other languages), reduce and many others.
> Parallelism in C++ is most often used for scientific applications and other forms of mass number crunching.
There are two aspects here. I agree that today, C++ is often used for such tasks.
Now, what are the reasons why C++ is used dominantly in this domain? I think more or less the only reason is performance. The good performance is what makes the authors of such libraries put up with the disadvantages of C++.
However, I think it is also true that Rust allows for a more concise and safe formulation of the computation (as does Scala, for example, which serves somewhat different purposes). For scientific applications, correctness matters, and knowing that code which compiles does not have hidden memory errors and data races is extremely valuable, because it can save a ton of time.
Also, if there are still differences between Rust and C++ performance, they are minor. In many cases, Rust is faster.
Now, if authors of scientific computing libraries were to come to the conclusion that there exists an alternative which produces at least as fast code, provides better and safer support for parallelization, and, with some learning, is easier to work with, why should these library authors continue to use C++?
Of course, nobody is going to ditch a large C++ project like Eigen overnight and rewrite it in Rust. There is too much inertia for that. Also, GCC support is still lacking. However, one can expect that the number of new projects which use Rust is going to increase, and the projects that are successful there will blaze a new trail. For something like Python extension modules, users of these libraries do not need to know anything about Rust.
Also, a nitpick. C++ is used in important scientific libraries. However, many essential libraries such as Numpy are written either completely in C, or use C interfaces, because C++ does not have a stable ABI and Python uses the C ABI. This would make a switch to Rust pretty easy. In fact, I think the impulses in this domain will come first from researchers and analysts who start to write small Rust extensions for Python which use the C ABI and integrate with Numpy, for example.
https://www.circle-lang.org/