This can feel insurmountable if you didn't know Sum Types are a thing.
Once you've seen Sum Types it's obvious that you just wanted Option<T> - a sum of None and Some of your pointer / reference / whatever type T.
Thus type T must actually point at / be something, whereas Option<T> can be None, and your APIs can be designed accordingly. For example, the function which always adds a Dog to this Kennel should take a Dog, not an Option<Dog> (that would be silly), but the function to take the most recent addition to the Kennel back out would return Option<Dog>, not Dog, because if the kennel is empty there is no latest addition to return.
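A minimal Rust sketch of that Kennel API (the names Kennel, add and take_latest are mine, just for illustration):

```rust
struct Dog {
    name: String,
}

struct Kennel {
    dogs: Vec<Dog>,
}

impl Kennel {
    // Adding always takes a real Dog: "add nothing" is
    // unrepresentable, so no Option in the parameter.
    fn add(&mut self, dog: Dog) {
        self.dogs.push(dog);
    }

    // Taking the latest addition back out can fail (empty kennel),
    // so the return type is Option<Dog>.
    fn take_latest(&mut self) -> Option<Dog> {
        self.dogs.pop()
    }
}

fn main() {
    let mut kennel = Kennel { dogs: Vec::new() };
    assert!(kennel.take_latest().is_none()); // empty: nothing to return
    kennel.add(Dog { name: "Rex".into() });
    assert_eq!(kennel.take_latest().unwrap().name, "Rex");
}
```

The point is that the signatures themselves encode which operations can come up empty.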
It feels like the discrepancy is about where the reference to "one" lives: the type itself acting as the reference for the one case (and then arguably memory efficiency), versus a separate reference / pointer.
Said another way... if it makes any sense (this is at the nexus of philosophy and arithmetic):
Since anything resembling emptiness or a length of 0 requires a reference, 0 comes after the one (1, the monad).
1 is primordial over 0.
Existence wraps emptiness.
For existence to wrap neither emptiness nor 0, it must wrap nil. However, no matter how you reference nil, you must do so through at least one layer of indirection.
The efficiency problem you mention is just a Quality of Implementation issue.
In Rust, for example, Option<&T> is literally the same size as &T, because &T is implemented as a non-NULL pointer, so the None value fits in the niche where NULL would go if this were a raw pointer. (This is the guaranteed niche optimization; the compiler routinely performs many other, more elaborate niche optimizations that are not promised, but Rust does promise that the optimization we need here will always happen.)
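You can check that guarantee directly; the same promise also covers Option around other guaranteed-non-null types like Box<T>:

```rust
use std::mem::size_of;

fn main() {
    // Option<&T> occupies exactly the space of &T itself:
    // None is encoded in the bit pattern a raw pointer would use for NULL.
    assert_eq!(size_of::<Option<&u32>>(), size_of::<&u32>());

    // Box<T> is also guaranteed non-null, so the same holds.
    assert_eq!(size_of::<Option<Box<u32>>>(), size_of::<Box<u32>>());
}
```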
The machine code ends up identical to what you'd get from unsafe languages with raw pointers if you remembered all the necessary checks; however, the safe source code written by humans has stronger typing, preventing them from writing code that would e.g. put a NULL Dog in a Kennel, an easy mistake in the unsafe languages.
Some languages solve just this one narrow problem by having specifically "Nullable" and "Non-nullable" types, plus some way to get the Non-nullable from a Nullable (with a branch if it's null): essentially Option, but built into the language. But wait, as well as this Option feature you also want Result, which is another Sum type. Should you add a special built-in for that too? Some languages choose to do so. OK, and how about ControlFlow? Poll? There will be others. I believe that languages should just suck it up and offer Sum types.
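With general sum types none of these need to be built-ins: Rust's Result, ControlFlow and Poll are all ordinary library enums, and you can define your own the same way. A sketch (MyResult and parse_age are hypothetical names of mine):

```rust
// An ordinary two-variant sum type, no language magic required.
// Rust's real Result, ControlFlow and Poll are defined just like this.
enum MyResult<T, E> {
    Ok(T),
    Err(E),
}

// A fallible operation returns the sum type, forcing the caller
// to handle both outcomes.
fn parse_age(s: &str) -> MyResult<u8, String> {
    match s.parse::<u8>() {
        Ok(n) => MyResult::Ok(n),
        Err(e) => MyResult::Err(e.to_string()),
    }
}

fn main() {
    match parse_age("42") {
        MyResult::Ok(n) => println!("age: {}", n),
        MyResult::Err(msg) => println!("bad input: {}", msg),
    }
}
```

Once the language has this one feature, Option, Result and friends fall out for free as library code.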