It just seems like it's talking about the usual ongoing issue of, "we're near the cutting edge of lithography techniques", but that's kind of a given (you always build on the best lithography method that can scale, which tends to be fairly close to the best lithography method period). We've been in a similar state for CPUs and RAM for decades now. Is there something different about SSDs?
It's different this time (and I cringe saying that), because we're rapidly approaching fundamental limits to scaling down silicon.
The concerns in the past have been mainly to do with lithography; e.g., when the feature size of the silicon went below the wavelength of the light we were using, we had to make masks that exploited interference patterns. This is a mere manufacturing problem.
But now we're getting to fundamental limits. Even if we had the ability to place the atoms however we wanted them, there's an intrinsic limit. You can't make a transistor out of half an atom.
We already hit a wall with frequency; for the longest time, it looked like speeds would go up and up. It's not an apples-to-apples comparison, because the Pentium 4 had a long pipeline, but a 3.8 GHz Prescott was released in 2005 - which is exactly the maximum turbo frequency of the 2011 Sandy Bridge 2600K I'm typing this on now. Ivy Bridge has it beaten by just 100 MHz.
Now, that's not to say that computation will stop progressing. But it's not going to look like last year's CPU just smaller for much longer; some pretty fundamental changes are going to have to be made. Dynamically reconfiguring memristor circuits are what excites me, but it's just as likely to be something else instead.
As far as flash memory in particular goes, I'm no expert, but cell durability is falling substantially with each shrink (on average; Intel bucked the trend with their 25nm flash), and so the usable limit to feature size may come more quickly than with standard transistors.
But the industry has managed to push through walls that seemed just as intrinsic before, so I wouldn't bet my life savings on it.
This is true, but the way you wrote it ignores the fundamental problem - it's not that we can't make transistors switch any quicker, it's that doing so causes such an increase in temperature that we risk damaging the device. That's why you can read about overclockers using things like liquid nitrogen to run chips at 8 GHz.
Cooling mechanisms like microchannel cold plates and, as we continue with 3D-ICs, interlayer cooling, can allow for higher frequencies.
I don't think better heat extraction would really change that much for today's CPUs (certainly when we head towards 3D chips it will become critical).
Gate delays are smaller at low temperatures; those LN2 overclocking runs aren't just fast because of efficient heat dissipation from the CPU, they're fast because the chip is being actively cooled to below room temperature.
So while heat dissipation is a factor, we're also close to the electrical limits as well. Otherwise water cooling (replacing the stock heat spreader) would get closer to LN2 runs. ALUs run at higher frequencies than the rest of the chip, but they're designed to do so (you'd have to shorten the gate pathways like a P4 to do that to the entire chip).
But ultimately, performance per watt is almost universally optimised for these days. It's critical in servers, laptops, and mobile phones - the demand for 6 GHz, 300 W CPUs would be limited to workstation chips, even though we could probably engineer them to be reliable.
Power consumption is always going to increase super-linearly with respect to frequency, probably as a fundamental property of any method of computation we use.
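To put rough numbers on that (these are illustrative values, not real chip parameters): dynamic power is classically modeled as P = C·V²·f, and since supply voltage has to rise roughly with frequency to keep transistors switching reliably, power ends up growing roughly with the cube of frequency:

```python
# Rough sketch of why power grows super-linearly with frequency.
# Assumes the classic dynamic-power model P = C * V^2 * f, plus the
# rule of thumb that supply voltage must scale roughly linearly with
# frequency. All numbers below are illustrative, not measured.

def dynamic_power(freq_ghz, base_freq=3.0, base_voltage=1.2, capacitance=1.0):
    """Estimate relative dynamic power at a given frequency."""
    # Simplification: voltage scales linearly with frequency.
    voltage = base_voltage * (freq_ghz / base_freq)
    return capacitance * voltage ** 2 * freq_ghz

p3 = dynamic_power(3.0)
p6 = dynamic_power(6.0)
print(p6 / p3)  # 8.0 - doubling frequency costs ~2^3 in power
```

That cubic relationship is why a 2x frequency bump costs roughly 8x the power, and why perf/watt pushes designs toward more cores at lower clocks instead.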
A few years ago, there was talk of producing chips with three-dimensional stacks of NAND cells (for example, http://www.semi.org/en/node/38361?id=sgurow0811 ). Has there been any movement on this front? Every article I can find on the subject is a year old or more. This strikes me as the ideal way around the limits of individual NAND cell size, with an obvious proviso that drastically new manufacturing techniques would need to be perfected before this limit is hit.
If it can be commercially produced, that would definitely set us up for a while for flash (CPUs would be thermally limited); though interestingly, not really that long!
If each layer was 50 nm high, and you built the chip up to an unrealistic 1 centimeter high (e.g., a 1 cm^3 chip instead of 1 cm^2, chosen because that would pretty easily fill a 2.5" drive), that would give you:
(1 centimeter) / (50 nanometers) = 200,000 times today's capacity.
Which is only 18 doublings, or 36 years more of Moore's Law (assuming the pessimistic 24-month doubling period), or roughly the gap between a Commodore 64 and a decent laptop today. Some people still working in the industry have gone through a larger increase. I've gone through a 1000-fold increase myself, and I'm only 25.
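Sanity-checking that arithmetic in Python (using the 50 nm layer height and 24-month doubling assumed above):

```python
import math

# Back-of-envelope check: how many 50 nm layers fit in a 1 cm stack,
# and how many capacity doublings / years of Moore's Law that buys.
layers = 1e-2 / 50e-9          # 1 cm stack / 50 nm per layer
doublings = math.log2(layers)  # capacity doublings that represents
years = doublings * 2          # assuming a 24-month doubling period

print(layers)     # 200000.0 - "200,000 times today's capacity"
print(doublings)  # ~17.6, which rounds up to the 18 doublings above
print(years)      # ~35 years, i.e. the "36 years" figure after rounding
```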
There are sure to be plenty more tricks we can use to get more capacity, but it's pretty mind-blowing to think that the theoretical limits to storage are within our lifetimes on an exponential scale. So as much as Moore's law hasn't failed us yet, it certainly will at some point (probably in the form of the doublings themselves taking exponentially longer and longer).
They're only within our lifetimes if you require the devices to be the size of modern day silicon chips. There's nothing preventing us from building bigger devices - I mean, my laptop and smartphone's SSDs already essentially act as caches for much larger remote storage and compute hardware.
And given that nature has managed to cram this amazing sentient device into a space the size of our skull, using a pretty inefficient design process, I'd say the problem will be not the quantity of the building blocks, but how they're organized :)
If we were willing to accept SSDs being as unreliable as human memory, we could increase capacities by an order of magnitude with current technology. In fact, if SSD controller design weren't so tricky, someone would have taken advantage of this already to build a pretty decent enterprise-scale caching system.
> But now we're getting to fundamental limits. Even if we had the ability to place the atoms however we wanted them, there's an intrinsic limit. You can't make a transistor out of half an atom.
I believe that limit is well below 18 nm, though. Last I heard, they'd made a transistor at 1.5 nm, and they weren't saying that was the limit. I'm not sure what the magic is with 18 nm, but I'd sure like to know.
I've been under the impression that NAND will someday be replaced with something like MRAM. Not sure how far off that is or how it's progressing; I'd be interested to hear if someone knows more about this.
I read an article somewhere (sorry, don't have reference to hand) that suggested it was diminishing returns from error correction vs number of levels per cell rather than lithography limits per se.
I think you're both talking about different sides of the same problem.
Visualize a single SSD cell as a bucket of electrons. If it's full of electrons, it stands for 1. If it's empty, it stands for 0. Reading this is fast and unambiguous - there's either lots or none. (Single Level Cell type)
Now visualize a bucket that can be filled with no electrons, a third full, two-thirds full, or full of electrons. Now you've got 4 possibilities, so you can encode 2 bits! (Multiple Level Cell type)
Most consumer SSDs use the latter kind, MLC. It's slower to read and needs stronger error correction, but you get so much more storage per chip that it's usually worth doing it this way.
The problem is that once you've cranked each of these cells down to 18nm or whatever, you're talking about holding and measuring (at most) 100 electrons per cell. What's that, 0, 33, 66, and 100 electrons? Crank it down even further and you can hold even fewer.
I think I pulled the numbers out of my ass, but the idea is essentially correct. We're close to the point where it's too difficult and error-prone to get a good read on a cell, requiring too much error correction to make engineering sense and rendering the MLC technique infeasible.
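A toy sketch of that shrinking margin (with the same made-up numbers as above - the point is the trend, not the exact values):

```python
# Toy model: with at most max_electrons in a cell and L distinct
# charge levels, adjacent levels are separated by max_electrons / (L - 1)
# electrons. Fewer electrons between levels = harder, more error-prone
# reads. Numbers are illustrative, per the caveat above.

def electrons_per_level_gap(max_electrons, levels):
    return max_electrons / (levels - 1)

# SLC at an older, larger node: huge margin between "empty" and "full".
print(electrons_per_level_gap(1000, 2))  # 1000.0

# 2-bit MLC at ~18nm: ~100 electrons total, 4 levels.
print(electrons_per_level_gap(100, 4))   # ~33 electrons between levels

# 3-bit MLC at the same node: even less margin.
print(electrons_per_level_gap(100, 8))   # ~14 electrons between levels
```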
Also going below 18nm is going to be a pain in the ass for other reasons. So it's sort of a mutual dead end.
All of the vendors are still using power-of-two MLC (i.e. 4 or 8 levels) right? I wonder when we'll start seeing 3 level (1.58 bits/cell) or 5 level (2.32 bits/cell) MLC start appearing? It's more complex to visualize, but the controller already abstracts the details of the storage quite a bit so it's not hard to imagine it spreading your 512 bytes across 2585 instead of 2048 cells. Some implementations are already doing compression so you could even have the algorithm's output be a tristate stream instead of a binary one if that's marginally more efficient for the silicon.
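Those cell counts check out: 512 bytes is 4096 bits, and a cell with L levels carries log2(L) bits, so:

```python
import math

# How many cells it takes to store 512 bytes (4096 bits) at various
# numbers of charge levels per cell.

def cells_needed(n_bits, levels):
    return math.ceil(n_bits / math.log2(levels))

bits = 512 * 8
print(cells_needed(bits, 4))  # 2048 cells at 2 bits/cell
print(cells_needed(bits, 3))  # 2585 cells at ~1.58 bits/cell
print(cells_needed(bits, 5))  # 1765 cells at ~2.32 bits/cell
```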
Another thing I wonder is how much neighboring cells interfere with each other. If they do, the most efficient data packing might even rely on bit encodings for multi-cell groups that avoid the combinations most likely to cause interference (similar to the 64b/66b encoding in 10-gigabit Ethernet, for example). Again, the controller has to do this type of thing already to implement ECC, so it seems like a straightforward extension.
Note that I'm not saying that either of these techniques avoid the "dead end" you're talking about -- they're both just ways of squeezing a tiny bit more out of the density/reliability curve at the margins. I'm just imagining how complicated flash controllers might get as they try to capture the last bits of life in the technology.
It is low, but controller technology has come along as well. The XtremeSystems forums have a 2x nm endurance test going on at the moment, and a 256 GB Samsung 830 has reached 2.4 PB written so far and is still going strong (at an average rate of over 200 MB/s).
What I would like to know is what are the expected future prices? Are they expected to continue to fall, level off, or is this expected to be a temporary low?
Does anyone have data on the price development of Apple SSDs?
Since newer Macs don't allow for an SSD change, the initial SSD price is very important. I suspect that because you have to max out your SSD at purchase if you want to use your Mac for a long time, SSDs in Macs have effectively become more expensive.
In the past, you could buy a Mac with a small SSD or even a HDD. After one or two years, you could replace your small SSD or your HDD with an up-to-date SSD and could usually benefit from lower storage prices …
Oh, I hadn't realized that even though the RAM is soldered on, the SSD is on a relatively easy-to-remove card. It's the only thing that looks feasible to upgrade.
As someone who bit the bullet and bought a 256 GB SSD in January, yes, I've noticed this rather dramatic drop recently. I paid about $420, and could get a 512 GB for $399, IIRC, going by last week's pricing. Ugh.
I wish I'd waited a bit. Switching again to a 512 would be... I dunno - not sure if it would be worth it, but I am constantly running out of room on a 256.
The perceptual performance increase from a SSD in a laptop or a desktop is tremendous, and there's no going back once you make the leap. Hopefully, HDDs will shortly follow DVD-ROMs down the memory hole for portables.
SSDs bring a different set of assumptions to the table than spinning hard drives. What would an OS designed from the ground up in the world of SSDs and the cloud look like?
SSDs are fast, but they're still orders of magnitude slower than RAM in terms of latency. Keep in mind, back in the day, the spread between DRAM and spinning disks wasn't as bad as it is today.
The next great performance challenge may be the so-called "memory wall" - the performance of CPUs vs the performance of RAM. I'd be curious to see what it would do to performance if RAM underwent the same dramatic improvement as nonvolatile storage.
We may actually see them both happening at once; one of the things R. Stanley Williams talks about in his excellent memristor talk is a method they developed for stacking hundreds of chips on top of a processor, which he says may end up providing just as much benefit as the memristor itself.
The possibilities that could be opened up by having a terabyte or more of extremely fast non volatile storage attached by a bus as large as you care to make it directly on top of a CPU are mind boggling.
Then when you consider that you can use them as FPGAs for computation instead and dynamically reconfigure them... wow.
Memristors are only a year or two away from commercial availability, though it will probably be a while after that until they live up to the hype. We live in exciting times.
FWIW, although it is no doubt not an apples/apples comparison, today's faster SSDs (~500 MB/s) have roughly the same bandwidth as Pentium-era PC-66 RAM, while today's faster DDR3 is >12 GB/s (which I guess doubles with dual channel). So it's not 100% crazy.
I wager it would look mostly the same. The reason I say this is because there would still be a distinction between the CPU's memory, the system memory, and then "disk". So long as the IO hierarchy exists, I wager the OS design would more or less be the same.
Now, when memristors come about which have compute+massive memory, then we will need a new OS.
Agreed on the first part; SSD is still orders of magnitude slower than RAM, which is really what matters in the distinction between memory and storage, from a computing point of view. The rest is just implementation details (SSD has wear leveling, HDD has park and resume, etc)
S3 isn't where the SSDs will go, it's the EC2 instances. Many, many, people would pay those numbers for local, dedicated SSDs when their instance is running a database.
Seagate and Western Digital are way too busy making money hand over fist selling hard drives :)
A lot of the SSD companies aren't really making that much money. The fabs to make the memory chips are huge capital investments and suppliers have historically been bad at managing supply and demand leading to volatile pricing.
Hence the current drop in prices cited in the original article.
And hard drives cost exactly the same as they did in 2011 (it probably depends on which drive you look at; a 3 TB drive costs about the same, and a 4 TB drive costs more now).
Apple charges way more than market rate for storage and memory upgrades. A stick of memory that costs $40 on Newegg might cost $200 from Apple.
In the past the sensible option was to buy a Macbook with stock memory / storage and upgrade it yourself, but of course that's no longer possible with the rMBP. Still, that says no more about the actual cost of SSDs than the price of a hotel room says about beds. :)
It seems flash drives go for about $1 a gigabyte or less. If the prices drop in half twice more they will be down to the $0.25/gigabyte I was paying for hard drives when I first started tracking prices in 2008. At that time, SSDs were $9.38 per gigabyte.
So where's my 1TB solid state disk to replace the 256GB one I've been using in my laptop for 3 years now? It seems as though all innovation in storage stopped in 2009, and we've been coasting ever since. And don't even mention there's still no 4TB hard disks. My blu-ray rip collection isn't getting any smaller and our storage systems aren't getting better anywhere near fast enough.
You don't need a solid-state drive for your movies. That's just wasteful. Mechanical hard drives are fine for data that will only be read, and only sequentially. Plus, they're still 10x cheaper, even given the currently inflated hard drive prices and record low SSD prices. There simply isn't a market for a 1TB consumer SSD. What you're really looking for is a hybrid drive with 64+ GB of Flash.
EDIT: Also, at the end of 2009, TRIM support was just hitting the market, and SandForce-based drives were just being announced. Since then, TRIM has become universal, as has 6Gbps SATA support, and most controllers have been through at least one other iteration. SSD caching has also hit the market in a variety of forms.
Hence, hybrid drives. Better yet, save yourself some money by ripping out the optical drive in your laptop and use that space for a second internal drive.
I seriously need a 1TB SSD (which either doesn't exist or is still way too expensive) for my laptop, because I am constantly moving files off my local system and onto my file server, which could also seriously use some 4TB drives, which also don't exist. I am running out of space on each of my systems faster than I ever remember.
If the stuff you're moving around is going over a network, then you obviously don't need SSD performance for it, since your laptop doesn't have a NIC capable of more than 1Gbps. If you can't fit two drives in your laptop, then you are exactly the kind of user that hybrid drives are targeted at.
I'm moving stuff to network drives all the time because the SSD capacity of my laptop is too small. Even after 3 years there still isn't anything on the market that is an acceptably priced replacement. Like I said, where is my $300 1TB SSD that should have been on the market by now?
3 years is a really long time for an entire industry to be totally stagnant. It's somewhat similar to how Intel has been sitting on their asses for the past few years ignoring the entire mobile computing revolution.
The SSD market has been anything but stagnant for the past three years. That's a ridiculous assertion for you to be making - three years ago, the SSD market was still immature and not at all ready to be mainstream. The first halfway-decent SSD (Intel X25-M) is only about four years old, and it's nowhere near competitive with current SSDs for price, capacity, or performance.
You seem to be under the false impression that the NAND is the most important part of an SSD. It's not. From an engineering perspective, it's the least important component - it just happens to be the primary reason for cost scaling with capacity. The controller and its firmware are far more complicated, and make all the difference for performance. Those components have made a lot of progress in the past three years. And even the NAND has advanced, just not exponentially, because while density is inversely related to unit cost, it is also inversely related to durability, and the drives 3 years ago weren't designed with excessive longevity requirements.
I suspect it's coming. You can get a 512 GB for $399 now. I'd wager a small amount that we'll see 1TB SSDs in the next year - they may still be $800+ when they first arrive, but will come down more after that.
AFAICT, the OCZ Octane is the only 1TB 2.5" drive on the market, but I suspect you'll start to see more of these later this year (at slightly more sensible prices, one would hope).
$1270 for a drive that is essentially a RAID-0 of two 512GB SSDs, packaged in a single 2.5" drive. OCZ also has the Colossus drives that offer 1TB as a RAID-0.
Drive speeds have increased, and capacity is slowly rising. Why compete on capacity when HDDs still dominate it? But yes, this industry is still slow in its evolution.
Can anyone provide more information on the nature of that limitation?
I did find reference to this article from 2 years ago: http://features.techworld.com/storage/3211959/is-nand-flash-...