I do think not being able to use 32 cores easily is a gap in the current language. In 2017 I rewrote a fairly high-performance Python dialog-to-Kafka daemon in Go. The Python version required a lot of specialized knowledge to write, using gevent, PyPy, hand-optimizing our framework for the hot paths, etc., and still was only using a few cores to do work.
The Go version was a dead-simple, "my first Go project" sort of implementation, and it used all 32 cores and therefore worked much better right out of the gate. (I did have goroutine worker pools for each step of the processing, but the division of work into stages was already there in the Python code.)
So yeah, Python is easier for lots of people to avoid making a mess of, until you want to use all your cores (which, again, means rerunning your things goes from four minutes to 15 seconds on that fancy laptop).