I don't dislike notebooks, but I definitely have one foot in the "don't use notebooks for serious software" camp. I recently worked on a project where we tried using a notebook as the development platform for a data visualization report. The designer uses the notebook + bokeh to iterate on the report, and then we use nbconvert with some environment variables to create reports for different datasets.
My biggest issue with this paradigm is that we had a lot of problems getting notebook development to work consistently and bug-free across the various environments we have (Windows, Linux, VS Code vs in-browser Jupyter, etc.). It seemed like it would've been so much easier to just use a vanilla Python script that generates the HTML report files. With hot reloading, the iteration could be just as fast.
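For what it's worth, the "vanilla script" version of this can be tiny. Here's a hedged sketch with the stdlib only -- the `REPORT_DATASET` env var and the file names are made up for illustration, and in the real project the template body is where bokeh would embed its plots:

```python
import os
from pathlib import Path
from string import Template

# Hypothetical env var selecting which dataset to render (illustrative name).
dataset = os.environ.get("REPORT_DATASET", "demo")

# Stand-in for loading the real ~150 MB JSON; a tiny inline record here.
data = {"title": f"Report for {dataset}", "rows": [1, 2, 3]}

# Plain HTML template; in practice this is where the bokeh output goes.
page = Template("<html><body><h1>$title</h1><p>$n rows</p></body></html>")
html = page.substitute(title=data["title"], n=len(data["rows"]))

Path(f"report_{dataset}.html").write_text(html)
print(f"wrote report_{dataset}.html")
```

Pair that with any file watcher (e.g. `entr` or the `watchfiles` package) and you get the hot-reload loop without Jupyter in the picture at all.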
The other issue is that everything was horribly slow with the amount of data we were dealing with (~150 MB of JSON). This is probably more related to Python/bokeh than to the notebooks themselves, but it meant that re-executing some cells was painful and would often hang or block the IDE.
I did run into some problems with nbconvert from time to time. It's worth noting that nbdev doesn't use nbconvert at all - it uses Quarto instead (which AFAICT does everything nbconvert does plus a lot more, and is faster, more extensible, and more "batteries included").
Having said that, it's possible that, given your experiences with needing to re-run some slow cells and having trouble making that work well, you might prefer to use "pure Quarto" instead of nbdev. With Quarto you can write your report as a .qmd file directly: https://quarto.org/ .
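To give a sense of what that looks like, here's a minimal sketch of a `.qmd` report (file contents are illustrative, not from the project above) -- a YAML header plus executable chunks, rendered with `quarto render report.qmd`:

````
---
title: "Data report"
format: html
---

```{python}
#| echo: false
import os
# Illustrative env-var parameterisation, mirroring the nbconvert setup above.
dataset = os.environ.get("REPORT_DATASET", "demo")
print(f"Rendering report for {dataset}")
```
````

Quarto runs the chunks and writes a standalone HTML file, so the designer iterates on a plain text file rather than on notebook JSON.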
Personally, I quite like the notebook environment for situations like this where there are some really slow cells -- I mainly do deep learning, and some of my cells take many hours to run -- since that state is cached and I can easily manipulate it and visualise it afterwards. I generally will then add some kind of serialization or caching once it's working so I don't have to re-run the slow bits every time. I'll often also use nbdev to export a .py script from the notebook so it's easy to re-run the whole thing from scratch.
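That serialization step can be as simple as a pickle-backed guard around the slow computation. A hedged sketch -- the helper name and cache path are mine, not part of nbdev:

```python
import pickle
from pathlib import Path

def cached(path, compute):
    """Return the pickled result at `path`, or run `compute` once and cache it."""
    p = Path(path)
    if p.exists():
        return pickle.loads(p.read_bytes())
    result = compute()
    p.write_bytes(pickle.dumps(result))
    return result

# Stand-in for an hours-long cell; here just a cheap computation.
result = cached("slow_cell.pkl", lambda: sum(range(10_000)))
print(result)
```

On the first run the slow branch executes; every re-run of the cell (or of the exported `.py` script) loads the cached bytes instead.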
(BTW we also released something today that's particularly helpful for this workflow: https://fastai.github.io/execnb/ . Basically, it's a parameterised notebook runner. It doesn't rely on Jupyter or nbclient or nbconvert. It's in the same general category as Papermill, but it's much more lightweight and requires learning far fewer new concepts.)