You are forgetting to price in some minor features that Aurora provides:
- Aurora's storage is spread across three availability zones.
- Backups.
- Automatic failover.
- No need to configure anything; it just works.
If your time is free, and you don't actually need anything resembling high availability for the data in the database, then that's a good price comparison. I'm not arguing that managed databases make sense for everybody, but if you're doing a price comparison then at least factor in multi-site redundancy for the data.
> You are forgetting to price in some minor features that Aurora provides
That's true and fair, although it cuts in both directions; skimming the docs, it looks like Aurora's prices include two replicas? But backups aren't free (to store), bandwidth isn't free, and IOPS aren't free. Also, my difficulty in figuring out a fair pricing comparison highlights another point: a dedicated server has a fixed price. Other than more servers for more instances/replicas, you're never going to pay more, and even then it's a simple "adding another replica will increase our costs to X*(N+1) per month", not a "scaling out will add X to our costs, but if we use more I/O than expected we'll add Y to our costs, and exporting data will cost Z in bandwidth".
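The fixed-vs-variable distinction can be sketched with two toy cost functions. All the numbers and parameter names below are made up for illustration; they are not real AWS or hosting prices:

```python
def dedicated_cost(replicas, per_server=100):
    # Fixed pricing: the bill scales only with server count,
    # i.e. the X*(N+1) formula from the comment above.
    return per_server * (replicas + 1)

def managed_cost(instances, storage_gb, iops_millions, egress_gb,
                 per_instance=120, per_gb=0.10,
                 per_million_io=0.20, per_gb_egress=0.09):
    # Usage-based pricing: several independent variables,
    # any of which can grow past what you budgeted for.
    return (per_instance * instances
            + per_gb * storage_gb
            + per_million_io * iops_millions
            + per_gb_egress * egress_gb)

# One server plus one replica: a single knob to reason about.
print(dedicated_cost(1))            # per_server * 2
# Two instances, 500 GB, 50M I/Os, 100 GB egress: four knobs.
print(managed_cost(2, 500, 50, 100))
```

The point isn't which number is lower; it's that the first function has one input you control directly, while the second has four that you can only estimate.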
It's because of thoughts like this that the number of tags per item is limited.
Initially they were going to allow unlimited tags, until someone pointed out that you could use tags on your items as a poor man's key/value store for free.
I'm reminded of people who use S3 as an eventually-consistent database. Encode your rows in CSV and use them as names of empty objects. Query using paginated LIST requests for a prefix of the columns.
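For the curious, the trick can be simulated without an actual bucket. The toy `KeyOnlyStore` below stands in for S3: object names are the only data, stored in lexicographic order, and "queries" are paginated prefix listings. All class and function names here are invented for the sketch:

```python
import csv
import io
from bisect import insort

class KeyOnlyStore:
    """Toy stand-in for an S3 bucket where only object names carry data."""

    def __init__(self):
        self._keys = []  # kept sorted, like S3's lexicographic key listing

    def put(self, key):
        # Creating an "empty object": the name is the whole row.
        if key not in self._keys:
            insort(self._keys, key)

    def list(self, prefix, page_size=2):
        # Paginated LIST request: yield pages of keys sharing the prefix.
        matches = [k for k in self._keys if k.startswith(prefix)]
        for i in range(0, len(matches), page_size):
            yield matches[i:i + page_size]

def row_to_key(row):
    # Encode one row as a CSV line to use as an object name.
    buf = io.StringIO()
    csv.writer(buf).writerow(row)
    return buf.getvalue().rstrip("\r\n")

store = KeyOnlyStore()
for row in [("users", "alice", "admin"),
            ("users", "bob", "guest"),
            ("orders", "42", "open")]:
    store.put(row_to_key(row))

# "SELECT * WHERE col1 = 'users'": a prefix LIST on the first column.
rows = [next(csv.reader([key]))
        for page in store.list("users,")
        for key in page]
```

This also shows why it only works as an eventually-consistent, prefix-indexed store: you can filter on a leading run of columns, and nothing else.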
Letsrun is unfortunately a very toxic forum, but also has loads of knowledgeable folks. It's like reading a wikipedia article with commentary from 4chan on the side.
Dealing with the backscatter from CSV misunderstandings can be fairly challenging. For a lot of us, the customer experience is improved by being as accommodating as possible rather than strictly correct. We at Intercom released a Ruby CSV parser that "is a ridiculously tolerant and liberal parser which aims to yield as much usable data as possible out of such real-world CSVs".
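To illustrate the "tolerant rather than correct" philosophy (this is not Intercom's parser, just a minimal Python sketch of the same idea), one common move is to salvage ragged rows instead of rejecting them:

```python
import csv

def tolerant_rows(lines, expected_cols):
    """Yield rows of exactly expected_cols fields, salvaging what we can."""
    for row in csv.reader(lines):
        if not row:
            continue  # silently skip blank lines
        if len(row) < expected_cols:
            # Pad short rows with empty fields rather than erroring out.
            row = row + [""] * (expected_cols - len(row))
        elif len(row) > expected_cols:
            # Fold overflow fields into the last column rather than
            # dropping the row (often an unquoted comma in free text).
            row = row[:expected_cols - 1] + [",".join(row[expected_cols - 1:])]
        yield row

messy = ["a,b", "a,b,c,d", "", "a,b,c"]
salvaged = list(tolerant_rows(messy, 3))
```

Every input line yields a usable three-field row, which is exactly the trade-off: you get maximum data out, at the cost of guessing what the producer meant.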
Seeing how the debugging process actually went, though, is also useful at times, to give you more examples of how other people work through problems where the cause is not clear.
And it allows for reflection: How would our troubleshooting and alerting handle this?
We had a similar haproxy session saturation problem a couple of months ago. By now, our alerting would pick this up within a minute and trigger alerts, which include a runbook to resolve it. Our standard resolution would fail in a case like this, but I'm pretty sure we'd solve it in a second iteration.