"Heroku was given access to updated source code which patched the vulnerability at the same time as other packagers. Because Heroku was especially vulnerable, the PostgreSQL Core Team worked with them both to secure their infrastructure and to use their deployment as a test-bed for the security patches, in order to verify that the security update did not break any application functionality. Heroku has a history both of working closely with community developers, and of testing experimental features in their PostgreSQL service."
I believe all the Heroku-hosted PostgreSQL servers are externally accessible, and there's no way to filter access by IP.
Of course hindsight is always 20/20, but perhaps it's a good idea for Heroku to consider adding some basic (optional) firewall layer to let customers control who can connect to the hosted db?
Disclaimer: I'm not a Heroku customer. I did, however, consider moving our Postgres databases over to them a little while ago.
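For context, Postgres itself already ships a host-based access layer in pg_hba.conf, so on a self-hosted box the kind of IP filtering asked for here looks roughly like the following (the CIDR, database, and user names are invented for illustration):

```
# TYPE   DATABASE  USER    ADDRESS          METHOD
hostssl  mydb      myuser  203.0.113.0/24   md5      # allow the office network, TLS required
host     all       all     0.0.0.0/0        reject   # refuse everyone else
```

The complaint in the comment is that on a hosted service the customer has no handle on this layer (or on any network firewall in front of it).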
Well, I'm a member of the Heroku staff who works on these matters, so I can tell you why it is not implemented that way. I hope the restriction can be lifted some day.
The problem is the sheer number of Heroku Runtime machines, scattered across a smattering of IP space, and the difficulty of rapidly and accurately propagating the firewall rules required for tight network access control as applications churn around in there. Even then, there have been some reports of voluminous firewall rule sets causing obscure problems. Of course, the world is an obscure place and we could deal with that in time; such a thing could surely be hardened, but the amount of bookkeeping required is a bit terrifying, and experience suggests it would not go entirely smoothly or be easy to debug. At the time this was reasoned out (maybe about two years ago?), it wasn't even widely known that Heroku offered any data storage service of significance. Early days.
So, the simple approach is to enable access from the entire Heroku Runtime layer. But who can put applications there? Anybody on the Internet. That's why the 'ingress' feature, which poked a temporary hole in the database firewall, was dropped as too marginal given the fairly severe inconvenience of it all: it had the feel of a weird Heroku-ism, especially in light of the lack of attacks using unauthenticated clients on Postgres until now. In addition, what about all the other addons? There would need to be an API, and because of the nature of what it is doing (poking holes before beginning, say, TLS negotiation, which requires even more round trips and is slow enough as it is), it would need to be stable, fast, accurate, and all that other good stuff, lest all addons effectively be rendered offline at once.
Other, more application-level approaches are possible (like tunneling all connections through a local unix socket, or something), but that's a little strange: it requires injecting odd stuff into the running container, makes your URLs look funny, and so on. This model has been experimented with by some of my colleagues and by staff of other, similar firms, and it definitely has its attractive sides. Nevertheless, one of the general guidelines in our implementation choices is not to be too weird for someone moving onto or off of the service. These approaches are also not in and of themselves immune to DoS or security problems; they need careful auditing. Maybe another look is indicated. And again: what about every other addon? What about your local computer? Are you going to install some weird agent, open source or not, from every such addon?
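To make the tunneling model above concrete: the agent would be a tiny local proxy that listens on a unix socket inside the container and shovels bytes to wherever the real database lives, so the database host only ever needs to accept connections from the agent's endpoint. This is a hedged stdlib sketch of that pattern, not Heroku's actual implementation; the socket path and remote address are whatever the deployment chooses (TLS, auth, and error handling omitted):

```python
import socket
import threading

def pipe(src, dst):
    # Copy bytes from src to dst until EOF, then half-close the write side
    # so the peer sees end-of-stream.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def serve(unix_path, remote_addr):
    # Accept connections on a local unix socket (the app's DATABASE_URL
    # would point here) and tunnel each one to remote_addr, a (host, port)
    # tuple for the real database endpoint.
    listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    listener.bind(unix_path)
    listener.listen(8)
    while True:
        client, _ = listener.accept()
        upstream = socket.create_connection(remote_addr)
        # One thread per direction; daemon threads die with the process.
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

This is also where the "makes your URLs look funny" complaint comes from: the app's connection string now names a local socket path rather than the database host.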
My personal favorite pie-in-the-sky option right now would be to cordon off a slice of contiguous, publicly addressable (but not publicly accessible) IP space so that firewall rules could remain compact and slow-changing, and to involve your local computer as well: imagine being able to VPN into two or three such networks simultaneously, because their addressing does not alias at all. But this is still in the realm of fantasy, and it would probably require looking at IPv6 to be able to segment the address space in a sane manner, which adds another layer of stuff that can be buggy, even in such mundane details as address parsing.
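The non-aliasing property above is just careful subnetting: if every tenant network is a distinct slice of one large prefix, routes to several of them can sit in one routing table without conflicting. A small illustration with Python's stdlib, using the IPv6 documentation prefix 2001:db8::/32 as a stand-in for real address space:

```python
import ipaddress
import itertools

# Carve the first three /48 slices out of one parent /32.  (subnets() is a
# lazy generator, so we only materialize the slices we look at.)
block = ipaddress.ip_network("2001:db8::/32")
slices = list(itertools.islice(block.subnets(new_prefix=48), 3))

# No two slices overlap, so VPN routes to all of them can coexist on one
# client machine -- the "does not alias at all" property.
for a in slices:
    for b in slices:
        if a is not b:
            assert not a.overlaps(b)
```

With IPv4 this kind of generous, contiguous carving is hard to come by, which is part of why the idea points at IPv6.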
So... that's some rambling, but I thought it might be useful to talk through some of the challenges here to motivate the discussion.
AWS Security Groups work across accounts, so Heroku (or whoever) could let you provide your account ID and Security Group name, then authorise access from that group.
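For reference, the EC2 API expresses this cross-account case as a UserIdGroupPair inside the ingress permission passed to AuthorizeSecurityGroupIngress (the account ID and group ID below are invented):

```
[
  {
    "IpProtocol": "tcp",
    "FromPort": 5432,
    "ToPort": 5432,
    "UserIdGroupPairs": [
      { "UserId": "111122223333", "GroupId": "sg-0123456789abcdef0" }
    ]
  }
]
```

The appeal is that the rule names a group, not IP addresses, so it stays compact no matter how instances behind the customer's group churn.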
Yes, when I did my little research into hosted PG, Heroku was pretty much the only viable option. That said, I did run into some difficulties running `rake spec` against the Heroku-hosted db (since you can't drop the database, only individual tables), which caused me some (unrelated) headaches.
Another thing I was really hoping for, but couldn't find with Heroku, was the ability to do point-in-time restores via the Heroku web/CLI interface. That would be a seriously nice feature if it were available...
Product manager of Heroku Postgres here; if you specifically need this functionality around point in time restores you should reach out to us. Would love to hear more around the use cases behind it.
Thanks Craig. I might do that, but I think we're too small a fish for any kind of bespoke solution. Given that you guys came up with WAL-E, I was secretly hoping this was somehow baked into some magical Heroku interface already...
I didn't say I want it for free. I'm just not a big-enough customer with deep enough pockets to have some customized solution built especially for me by heroku.
That doesn't mean other people like me wouldn't be interested in something like this if it existed.
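For anyone self-hosting with WAL-E, the raw ingredients for point-in-time restore are already there: pull a base backup with `wal-e backup-fetch $PGDATA LATEST`, then let Postgres replay archived WAL up to a target timestamp. A hedged sketch of the recovery.conf for the 9.x-era setup WAL-E documents (the timestamp is made up):

```
restore_command = 'wal-e wal-fetch "%f" "%p"'
recovery_target_time = '2013-04-01 12:00:00 UTC'
```

What the commenter is asking for is essentially this flow wrapped behind a hosted web/CLI button rather than done by hand.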