I think you're both talking about different sides of the same problem.
Visualize a single SSD cell as a bucket of electrons. If it's full of electrons, it stands for 1; if it's empty, it stands for 0. Reading this is fast and unambiguous - it's either lots or none. (Single-Level Cell, or SLC.)
Now visualize a bucket that can be empty, a third full, two-thirds full, or completely full of electrons. Now you've got 4 possibilities, so you can encode 2 bits! (Multi-Level Cell, or MLC.)
Most consumer SSDs use the latter kind, MLC. It's slower to read and the reads need heavier error correction, but you get so much more storage per chip that it's usually worth doing it this way.
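To make the buckets concrete, here's a toy sketch (in Python) of how an MLC read could bucket a cell's charge into one of four levels and map each level to two bits. I've used a Gray-code mapping so misreading by one level only flips one bit; the thresholds and mapping are made up for illustration, not taken from any real controller.

    # Toy MLC read: 4 made-up charge levels -> 2 bits (Gray-coded).
    LEVELS_TO_BITS = {0: "00", 1: "01", 2: "11", 3: "10"}

    def read_cell(fill, thresholds=(1/6, 1/2, 5/6)):
        """Bucket a normalized fill level (0.0 = empty, 1.0 = full) into one of 4 levels."""
        level = sum(fill > t for t in thresholds)
        return LEVELS_TO_BITS[level]

    print(read_cell(0.05))  # '00' - basically empty
    print(read_cell(0.65))  # '11' - about two-thirds full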
The problem is that once you've cranked each of these cells down to 18nm or whatever, you're talking about holding and measuring (at most) around 100 electrons per cell. What's that, 0, 33, 66, and 100 electrons? Crank it down even further and you can hold even fewer.
Admittedly I pulled the numbers out of my ass, but the idea is essentially correct. We're close to the point where it's too difficult and error-prone to get a good read on a cell - it would take more error correction than makes engineering sense, which renders the MLC technique infeasible.
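To put some (equally made-up) numbers behind that: with a fixed electron budget, every extra bit per cell doubles the number of levels and shrinks the gap between adjacent ones.

    # Rough arithmetic: electrons separating adjacent levels, assuming ~100
    # electrons total per cell (the rough guess from above).
    def electrons_between_levels(total_electrons, bits_per_cell):
        levels = 2 ** bits_per_cell
        return total_electrons / (levels - 1)

    for bits in (1, 2, 3):
        print(bits, "bit/cell:", round(electrons_between_levels(100, bits), 1))
    # 1 bit/cell: 100.0
    # 2 bit/cell: 33.3
    # 3 bit/cell: 14.3

The fewer electrons between levels, the harder it gets to tell them apart on a read.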
Also going below 18nm is going to be a pain in the ass for other reasons. So it's sort of a mutual dead end.
All of the vendors are still using power-of-two MLC (i.e. 4 or 8 levels), right? I wonder when we'll start seeing 3-level (1.58 bits/cell) or 5-level (2.32 bits/cell) MLC appear. It's more complex to visualize, but the controller already abstracts the details of the storage quite a bit, so it's not hard to imagine it spreading your 512 bytes across 2585 cells instead of 2048. Some implementations already do compression, so you could even have the algorithm's output be a tristate stream instead of a binary one if that's marginally more efficient for the silicon.
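To show where the 2585 comes from, here's one (purely hypothetical) way the packing could work: treat the 512-byte sector as one big integer and re-express it in base 3, one digit per 3-level cell. A real controller would presumably work in small chunks rather than on a 4096-bit bignum, but the cell count comes out the same.

    import math

    def pack_base3(sector):
        """Re-express a sector as base-3 digits, one digit per 3-level cell."""
        n = int.from_bytes(sector, "big")
        digits = []
        while n:
            n, d = divmod(n, 3)
            digits.append(d)
        # Pad to the worst case so every sector occupies the same number of cells.
        cells_needed = math.ceil(len(sector) * 8 / math.log2(3))
        return digits + [0] * (cells_needed - len(digits))

    print(len(pack_base3(b"\xff" * 512)))  # 2585 cells, vs 2048 two-bit cells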
Another thing I wonder is how much neighboring cells interfere with each other. If they do, the most efficient data packing might even rely on bit encodings for multi-cell groups that avoid the combinations most likely to cause interference (similar to the 64b/66b encoding in 10-gigabit Ethernet, for example). Again, the controller already has to do this sort of thing to implement ECC, so it seems like a straightforward extension.
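Here's a deliberately tiny, made-up example of that kind of constrained code: 2 data bits per group of 4 cells, with codewords chosen so that no two neighboring cells ever both sit at the highest charge level (assuming, hypothetically, that's the worst pattern for coupling). Real codes like 64b/66b waste far less space; this is just the shape of the idea.

    # Map 2 data bits -> 4 cells; every codeword ends in 0 and contains no
    # "11", so no two adjacent cells are ever both fully charged, even
    # across codeword boundaries.
    ENCODE = {"00": "0000", "01": "0010", "10": "0100", "11": "1000"}
    DECODE = {v: k for k, v in ENCODE.items()}

    def encode(bits):
        return "".join(ENCODE[bits[i:i+2]] for i in range(0, len(bits), 2))

    def decode(cells):
        return "".join(DECODE[cells[i:i+4]] for i in range(0, len(cells), 4))

    coded = encode("110110")
    print(coded)                     # '100000100100' - no adjacent 1s anywhere
    print(decode(coded) == "110110") # True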
Note that I'm not saying that either of these techniques avoids the "dead end" you're talking about -- they're both just ways of squeezing a tiny bit more out of the density/reliability curve at the margins. I'm just imagining how complicated flash controllers might get as they try to capture the last bits of life in the technology.