Oh yeah, imports in Python are not just extending a namespace like in many other languages. At runtime, the first import actually executes the module's top-level code (for a package, its `__init__.py`), which can have arbitrary side effects - an entire program can run (although it usually shouldn't) just in the import. Imports of large modules often take entire seconds.
It is absolutely worthwhile to avoid unnecessary imports if possible.
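A minimal sketch of what "imports run code" means, using a throwaway module written to a temp dir (the name `noisy` is made up for the demo): its top-level statements execute on the first import, and only the first, because `sys.modules` caches the result.

```python
import os
import sys
import tempfile

# Write a throwaway module whose top level has a visible side effect.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "noisy.py"), "w") as f:
    f.write("print('running at import time')\nCONFIG = {'gpu': False}\n")
sys.path.insert(0, tmp)

import noisy            # this line executes the print() above
import noisy            # cached in sys.modules: silent the second time
print(noisy.CONFIG)     # the value computed at import time
```

The second `import noisy` is free precisely because of that cache, which is also why a slow import only hurts once per process.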
I know they _can_ have side-effects, I’ve just never seen a case where it actually mattered, and I have used Python professionally for 10 years. So I’m curious if this is more common in ML libraries or something.
I guess it depends on your definition of "side-effects" but it definitely comes up in common ML packages. For one example, importing `torch` often takes seconds, or tens of seconds, because the import itself needs to determine if it should come up in CUDA mode or not, how many OpenCL/GPU devices you have, how they're configured, etc.
It wouldn't surprise me if the original reason is the pervasive use of jupyter notebooks in ML, which don't adhere to normal python conventions, and are affected by slow imports only when those sections are explicitly evaluated.
Side-effects in imports are, in my opinion, unnecessary. They lose some of the benefits of static analysis, make it harder to run with different parameters during tests, get in the way of compiling to native code (where such tools exist), slow everything down, and more.
Libraries could have an initializer function and the problem would go away.
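A sketch of that initializer-function style (all names here are hypothetical, not any real library's API): the module defines cheap functions at import time and exposes an explicit `init()` so the expensive probing happens on demand, with caller-chosen parameters.

```python
_config = None

def init(device="cpu", num_threads=1):
    """Do the expensive setup on demand, e.g. probing GPUs, reading env vars."""
    global _config
    _config = {"device": device, "num_threads": num_threads}
    return _config

def get_config():
    """Fail loudly if setup was skipped, instead of half-working."""
    if _config is None:
        raise RuntimeError("call init() first")
    return _config

init(device="cuda")
print(get_config())   # {'device': 'cuda', 'num_threads': 1}
```

Tests can then call `init()` with different parameters per case, and importing the module stays instant.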