
A few people in the comments are saying "isn't it better to just set up your own SQL server instead of RDS?" and similar. I don't want to reply to each one, so I'll say it here.

While I can totally sympathize from a programmer's point of view (setting things up and tweaking everything is great fun), you need to ask yourself whether it is in the business's interest to do so. Especially if you're working in a small team with no dedicated infrastructure staff, or at a startup with a short runway and a lot of urgent user-facing changes.

Doing something on your own (e.g. setting up your own alternative to S3, or configuring your own SQL servers) comes with a cost, and it's not only the programming/initial setup time. There's also the opportunity cost (instead of setting up a server, I could, for example, be analyzing user data); maintenance (more things to worry about that you could have outsourced); the skill set required to run the infrastructure (running your own SQL cluster requires more knowledge and training than running one on RDS); etc.

So is it in the interest of the business to run your own infrastructure?

If you have thousands of servers and are spending millions on them, probably yes, but then you can probably also negotiate an attractive deal with GCE or AWS :)

If your application needs something complex and performance-related that is harder to do in the cloud (e.g. custom hardware), then again, running your own infrastructure might be better.

But if you are like the majority of companies/products (you just need your infrastructure to run reliably, and performance only needs to be good enough), using AWS and friends can make a big difference.



If you come to the conclusion that the opportunity cost is too high for your team/company, fine; I can believe it, as long as you're also weighing the benefit of learning. All learning has an opportunity cost.

I do believe that someone who can set up and configure nginx to do load balancing, caching, and rate limiting, and build middleware in Lua, is potentially going to be a more productive full-stack developer than someone who can't. Having managed PostgreSQL has made me a more effective programmer. I have a good understanding of how it vacuums and collects statistics, and of the relationship between connections, sorts, and work_mem, so I'm better equipped to write queries and troubleshoot issues.
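To make the work_mem point concrete, here's a back-of-envelope sketch. All the numbers below are illustrative assumptions, not tuning advice; the key fact is that each sort or hash node in a running query can claim up to work_mem, so worst-case memory use scales with connection count, not just with work_mem itself.

```python
# Back-of-envelope: why work_mem must be sized against max_connections.
# Each sort/hash node in an executing query may use up to work_mem, so
# the worst case is roughly connections * sort_nodes_per_query * work_mem.
# All numbers below are illustrative assumptions.

def worst_case_sort_memory_mb(max_connections, work_mem_mb, sort_nodes_per_query):
    return max_connections * work_mem_mb * sort_nodes_per_query

# A seemingly modest work_mem of 64MB, with 200 connections and
# 2 sort nodes per query, can in the worst case ask for:
demand = worst_case_sort_memory_mb(200, 64, 2)
print(f"worst case: {demand} MB")  # 25600 MB, far beyond a 16GB box
```

This is exactly the kind of interaction that's easy to miss if you've never operated the database yourself.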

The gain to me personally and to my employer (and future employers) is not trivial.


This reminds me of recent discussions about how countries that outsource their manufacturing quickly lose knowledge of manufacturing technology and fall behind in innovation and self-reliance. I don't think we are there yet, but it is conceivable that system administration could one day become a lost art to many. Something to consider as more of our infrastructure needs are met by the cloud. I'm sure I'm overstating this, but I figured I'd share anyway.


It's true that you gain some technical knowledge by learning all that. The question is whether that is worth the opportunity cost of not learning other things or doing something else during that time.

Every minute you spend studying nginx config files or adjusting work_mem is a minute you are not spending working on your actual product or app.

To some extent it is indeed beneficial to know how all that stuff works. But at some point your time is better spent working on things that differentiate your specific company from others, rather than spending it on twiddling the same set of knobs that everyone else has.


Great point, and I totally agree with you. Treating everything like a black box that you just shove money into isn't going to take you far either :)


The biggest advantage that makes RDS and friends (DynamoDB, Redshift) so extremely attractive is that they (almost) completely offload operating-system and database-server responsibilities onto Amazon. While some workloads legitimately require physical hardware to execute on, there are a LOT of costs associated with that decision (networking, storage, power, hosting, licensing, etc.) that people tend to forget in cost comparisons.

I've worked with hardware for several years and love the challenges that come with it, but in most cases, the cost of debugging those problems and working with vendors to get fixes/replacements isn't worth it. It's a HUGE time suck. I've been working with AWS for the last few months, and being able to safely dispose of a bad instance and spin up the services that ran on it somewhere else is amazing.

If I were in charge of building infrastructure at an early-stage startup (or even a later-stage one), I would absolutely, 100%, start with AWS and really squeeze every bit of performance out of the code before thinking about going physical.


You still have to tune the configuration of RDS instances, and they can make some administrative tasks more difficult because they don't grant SUPER and/or don't allow system-level access. For example, you can't use innobackupex/xtrabackup on an RDS instance.

I appreciate that it is often worthwhile to just pay someone else to handle things for your company, but I think people are too quick to flip the switch one way or the other; they either run everything "in the cloud" or nothing there. The truth is that as with most things, there is a happy medium tailored to each company's specific circumstances.

AWS is very expensive. People underestimate how expensive it is. When we switched to AWS, our monthly bill was about 70% of the total cost to buy (not rent, buy) all the machines in our old bare-metal datacenter, which were still perfectly performant (we switched because the execs wanted to be super-duper cool cloud users, not because of any real technical limitation, though there are pros and cons either way).

I have to think this is fairly common. I hear a popular trick for netting big savings in your first year as CFO is simply to force the tech guys to cut out Amazon and replace it with a cheaper cloud provider or colocated bare metal.


AWS is a platform. If all you want is a VPS, it's overpriced and underperformant (or so I hear).

If you just took a load off of bare metal and moved it to AWS, I don't doubt it got significantly more expensive... but that's a terrible misuse of the platform.

When you develop an app around the products they offer, it starts to look a lot more reasonable. Especially at a small scale.

When I need a durable messaging service in my app, I don't need to go read up on rabbitmq, find a couple servers somewhere, set up some sort of redundant message queue so I get some level of durability, set up monitoring, deal with updates, debugging problems, etc. I just make some API calls to SQS and it's already there for me.
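To make "durable messaging service" concrete: the contract SQS gives you (and that you'd otherwise have to rebuild yourself on RabbitMQ) is roughly send/receive/delete with a visibility timeout. Below is a toy in-memory sketch of that contract; it is purely illustrative and nothing like SQS's actual replicated, hosted implementation, which is the whole point of paying for the service.

```python
# Toy sketch of the queue contract a service like SQS provides: a message
# stays in the queue until explicitly deleted; a received message is only
# hidden for a visibility timeout, so a crashed consumer's work reappears.
# Purely illustrative; real SQS is a distributed, durable, hosted service.
import time

class ToyQueue:
    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # handle -> (body, time when it becomes visible)
        self._next_id = 0

    def send(self, body):
        handle = str(self._next_id)
        self._next_id += 1
        self._messages[handle] = (body, 0.0)   # visible immediately
        return handle

    def receive(self, now=None):
        now = time.monotonic() if now is None else now
        for handle, (body, visible_at) in self._messages.items():
            if visible_at <= now:
                # Hide the message instead of removing it.
                self._messages[handle] = (body, now + self.visibility_timeout)
                return handle, body
        return None

    def delete(self, handle):
        self._messages.pop(handle, None)

q = ToyQueue(visibility_timeout=30)
q.send("resize-image-42")
handle, body = q.receive(now=0.0)
# Consumer crashes without deleting: the message is redelivered later.
assert q.receive(now=10.0) is None             # still hidden
assert q.receive(now=31.0) == (handle, body)   # visible again
```

Everything in this sketch, plus replication, monitoring, and updates, is what the SQS API calls buy you.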

If "cut out Amazon and replace it with colocated bare-metal" is really that simple, then chances are you're not making very good use of the platform and you should absolutely switch off of it.


Right, I said that there wasn't really a technical reason for the move and that it was primarily motivated by political concerns. I'm not defending it.

I think you're right that a piecemeal approach is best. If you're using SQS because you identified it as a component that could be quickly and easily integrated at a lower cost than it would take to run apt-get install rabbitmq-server, then great. That may sound like a trivialization but for many companies the reality is that their "Linux guy" is a noob who can barely wield apt-get, and in these cases, something like SQS is indeed a good offering. In our case, it's definitely easier/cheaper/better to install RabbitMQ.

The ideal infrastructure combination is going to vary between companies based on the technical resources they have available and the technical requirements of their applications, but what I'm saying is that in general, there shouldn't be a default position of "all cloud" (super expensive, and also potentially time consuming) or "all metal" (super time consuming, and thus expensive).

I have to admit that there is something persistently annoying about your comment. I think it's the implication that if we switched everything to pure Amazon, it would somehow suddenly become a financially beneficial option. I think that is absurd and the only explanation for such a position is fanboyism.

Like I said, I'm sure that for some companies, money can be saved by using an appropriate combination of AWS services, but it shouldn't be taken as an implicit truth that it will apply to your circumstances, or that if it doesn't, you just need to intertwine further with Amazon's platform until it becomes impractical to move off of it. I'm sure at that point you will be "saving money" by staying on Amazon, but only because you've created such a massive dependency on an external third party for your company's operation that it'd take months to walk it back.

One last note. I do use an AWS service for my side project: Route53. It costs me less than $2/mo and provides a quick, easy, and powerful way to manipulate DNS records for my domains. For me, this makes a lot more sense than using the registrar's free DNS servers that take up to an hour to update or trying to run BIND myself. I'm completely open to using other AWS services when it makes sense.


The financial picture of AWS only makes sense if you are using it to reduce (either in absolute terms or in growth rate) your spend on IT staffing costs.

If you're paying 100% of your old IT staffing costs and paying the AWS bill, you're double paying part of it, IMO.


Sure, that's an argument that I'm sure crosses the threshold sometimes. However, with the amount of money we're paying AWS, we could've easily brought on several new full-time, dedicated guys to handle datacenter, hardware, colo, and sysadmin stuff and still come out ahead.

AWS is very expensive. People underestimate how expensive it is. I don't preclude the possibility that it's cost-effective for someone out there, but I seriously doubt that's the case for most of its users.


When our CEO told me we were moving to "the cloud," no amount of financial or technical explanation would convince him.

It was entirely about chasing VC dollars and converting CAPEX into OPEX. VCs hate CAPEX, and previous fundraising rounds at our previous venture highlighted our large CAPEX due to millions of dollars of physical hardware as a major turnoff.

So I retired perfectly good three-year-old servers that were fully amortized, replaced them with 4x as many instances, and brought our monthly infrastructure spend up by 10x.

10X! But at least now the VCs are happy.


AWS is much like renting a car. If you only need a car for a week out of the year, you're much better off renting one rather than buying one and leaving it idle for 51 weeks. But if you know you'll need a car every day, it's much cheaper to buy or long-term lease one rather than rent at the daily rate.

AWS is the daily-rate car rental. It's great when 1) you have highly variable load, and 2) your entire infrastructure is instrumented to scale up and down quickly with minimal manual intervention.

If, for example, you are running a large e-commerce site -- the case it was designed for -- AWS is great. Your load will be average most of the year, then go up a lot for Christmas shopping in November, then go up even more in December... then load goes back down to average for the remaining 10 months.
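The rental-car analogy can be put into numbers. The prices below are made-up but plausible assumptions (an assumed $1/hr on-demand instance, an assumed $6,000 server amortized over three years, an assumed colo fee), not real quotes; the point is the shape of the break-even calculation, not the exact figures.

```python
# Break-even utilization for on-demand cloud vs. owned hardware.
# All prices are illustrative assumptions, not real quotes.

ON_DEMAND_PER_HOUR = 1.00     # assumed comparable cloud instance
SERVER_PRICE = 6000.0         # assumed purchase price
AMORTIZATION_YEARS = 3
COLO_PER_MONTH = 150.0        # assumed rack space, power, bandwidth

HOURS_PER_MONTH = 730
owned_per_month = SERVER_PRICE / (AMORTIZATION_YEARS * 12) + COLO_PER_MONTH

# Hours of on-demand usage per month at which renting costs the same as
# owning (the owned box costs the same whether it sits idle or busy):
break_even_hours = owned_per_month / ON_DEMAND_PER_HOUR
utilization = break_even_hours / HOURS_PER_MONTH

print(f"owned: ${owned_per_month:.0f}/mo, break-even at "
      f"{break_even_hours:.0f} on-demand hours (~{utilization:.0%} utilization)")
```

With these assumed prices, renting wins below roughly 43% utilization and owning wins above it, which is exactly the rental-car trade-off: great for a week a year, expensive for every day.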

Part of the issue is also cultural. Younger developers who grew up hearing "cloud is awesome" every day think the very idea of colocating hardware is archaic and ridiculous, and won't even consider it.


Yeah, I agree with you. A good rollout would use bare-metal base hardware and scalable on-demand instances that can be auto-spawned and auto-decommissioned in the cloud.

However, I agree that the node.js generation is coming up believing that cloud is implicitly superior. Really, I think this is a reaction to the fact that they don't know anything about system administration, software or hardware, and they're trying to cover that up by claiming that anyone who uses real hardware is behind the times (they've even taken it to the extreme of claiming that anyone who uses a real server-side language is behind the times). It'd be great if we had more understanding of the marketing and political efforts in the tech world, and had an entity that tried to counteract some of these fads before they got out of control. node.js itself would be a good target for such an entity.


Bringing on more people has additional hidden costs as well. HR, payroll, benefits/401K, management, staffing an on-call rotation for hardware issues (you still have to staff software on-call in either case), dealing with vacations, sickness, training, etc.

It's surprisingly expensive to run a full complement of web-facing IT services. Amazon (and Azure/GCE/RS/others) just make that calculus more explicit and in your face. (I also acknowledge that they are running AWS as a profitable and profit-seeking enterprise, so yes, they are marking things up beyond the lowest possible cost.)

For us, AWS is absolutely more expensive on a headline-number basis. But it has also decreased our production-delivery latency and increased our agility compared to on-prem solutions. We can get products in front of customers in hours or days that used to take weeks or months.


>Bringing on more people has additional hidden costs as well.

I'm including those costs in my estimation of how many people we could hire and still save money over using AWS. It's still several.


Bare metal is cheaper than cloud (not just AWS), sure. But you also have to know how to do bare metal right: you need the knowledge base, and that is a critical point. You don't want to be training new developers in how to properly install and monitor RAID cards, for example.

But if you only see AWS as "virtualised servers", you're probably not using it to its full potential.


This can definitely happen in some cases, but in plenty of other cases, moving off the cloud can give you an enormous cost savings _and_ dramatically improve your performance at the same time.

A coworker of mine was employee ~10 at a startup with some basic photogrammetry workloads. They ran jobs over ~0.1 to 1GB of images (lasting anywhere from minutes to hours) and operated primarily on Digital Ocean. They also had VPSes at AWS and Azure. My coworker helped them move this to LXC running on top of cheap last-gen dedicated servers. He built the entire (pretty simple) job management stack in about two weeks.

As a result, the company was able to scale to ~100x the previous number of jobs at roughly the same cost. The improvement was so dramatic that they were able to offer a free tier for their services - a business-model change enabled by cheap physical hardware.

In my own experience, I recently moved a production Postgres database off of Compose, who were charging us something like $400 a month for 2GB of RAM and 20GB of storage. I moved it to a hot-swap pair running in Rackspace on boxes that cost $2,000/month for the pair. That's 5x the cost. However, they each have 64GB of RAM and an 800GB SSD. The migration took me about two days total, including learning how to set up the hot-swap, and we have had very few operational issues with the database (knock on wood).
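The per-resource arithmetic behind that migration shows why the 5x headline price was still the better deal. The figures are taken from the anecdote above (treating the pair's combined 128GB as the purchased capacity), rounded for illustration:

```python
# Cost per GB of RAM for the two setups described above.
# Figures come from the anecdote; combined capacity of the pair is used.

compose_cost, compose_ram_gb = 400.0, 2         # hosted Postgres plan
rackspace_cost, rackspace_ram_gb = 2000.0, 128  # hot-swap pair, 64GB each

compose_per_gb = compose_cost / compose_ram_gb        # $200.00 per GB
rackspace_per_gb = rackspace_cost / rackspace_ram_gb  # $15.625 per GB

print(f"hosted: ${compose_per_gb:.2f}/GB, dedicated: ${rackspace_per_gb:.2f}/GB")
# 5x the monthly bill buys roughly 13x cheaper RAM (and 40x the storage
# per box: 800GB vs 20GB).
```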

This decreased our mean page load times by an order of magnitude.

In some cases, dedicated hardware can really be worth the time cost.


I agree with that for many AWS services, especially S3. However, I find it hard to believe for RDS. I have spent thousands of dollars on AWS bills and have never used RDS; I've always set up my own DB servers.



