
A few questions:

a) How do you prevent technical debt? It seems harder to pay down, since public APIs shouldn't have breaking changes. In theory you could version the APIs and serve both versions, or add a new API for each breaking change, but these solutions seem awkward.

b) How do you start developing multiple microservices at the same time? I would expect APIs to change a lot in the beginning, which would mean that updating one microservice would break another. Perhaps that is acceptable before the first "stable release" of a microservice.



This is basically it. Designing a game? Build the Game service, with all your game logic. Need users and authentication now? Start writing an Identity service, and so on.

The only difference is that instead of writing an Identity class and using it directly in your Game service, you write the Identity class, expose it via a REST API, and then provide a client library that talks to that REST API. Call it IdentityInterface or libidentity or something. Pydentity, whatever. It makes an HTTP request, gets a serialized object, deserializes it, and returns it.

For simplicity, put all your public models in that library, and it gets shared by both the Identity service and the Game service. Those models represent an object and what you can do with it. The Identity service is where all of that actually happens.
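As a concrete sketch of that shape (the `IdentityClient` name, the `/users/{id}` path, and the injectable `http_get` are my own illustrative assumptions, not any real library):

```python
import json
from dataclasses import dataclass

# Shared public model: lives in the interface library ("libidentity"),
# imported by both the Identity service and the Game service.
@dataclass
class User:
    id: int
    name: str

    @classmethod
    def from_json(cls, payload: str) -> "User":
        data = json.loads(payload)
        return cls(id=data["id"], name=data["name"])

class IdentityClient:
    """Public interface to the Identity service's private REST API."""

    def __init__(self, base_url: str, http_get=None):
        self.base_url = base_url
        # The transport is injectable so callers (and tests) can stub it out.
        self.http_get = http_get or self._default_get

    def _default_get(self, url: str) -> str:
        from urllib.request import urlopen
        with urlopen(url) as resp:
            return resp.read().decode()

    def get_user(self, user_id: int) -> User:
        # Makes an HTTP request, gets a serialized object,
        # deserializes it, and returns it.
        return User.from_json(self.http_get(f"{self.base_url}/users/{user_id}"))
```

The Game service only ever sees `IdentityClient.get_user()`; whether the data comes from a local DB or a remote service is an implementation detail of the library.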

This is also how you solve the 'multiple microservices at the same time' problem: your interface library provides the public interface, and the backend REST API is the 'private' API used by that interface. You can change the backend API and the public library together and no one notices, or you make incompatible changes to the public service and fix everything before you deploy. Ideally, you add new APIs, migrate services over, then deprecate the old ones.
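That migration path can be sketched in a few lines; the handler names and the toy routing table below are hypothetical stand-ins for a real framework:

```python
# Sketch: serving two API versions side by side during a migration.

def get_user_v1(user_id):
    # Old response shape: a flat name field.
    return {"id": user_id, "name": "alice"}

def get_user_v2(user_id):
    # New, incompatible shape: a structured name.
    return {"id": user_id, "name": {"first": "alice", "last": "smith"}}

ROUTES = {
    "/v1/users": get_user_v1,   # deprecated once all callers have migrated
    "/v2/users": get_user_v2,
}

def handle(path, user_id):
    return ROUTES[path](user_id)
```

Both versions stay up while consumers move from `/v1` to `/v2`; only then does the `/v1` route get deleted.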

In the end, each service sees the world as fundamentally the same; there's a library with classes and functionality, and you use that to do things. If you design it right, it's never obvious from your code that you're accessing a different service elsewhere in your infrastructure.


The problem is that microservices are not objects. They leak reality into your problem domain in a way that simply cannot be made to go away.

If regular object oriented programming languages had method calls that randomly failed, were delayed, sent multiple copies of a response, changed how they behaved without warning, sent half-formed responses ... then yes it would be the same.

Distributed systems are hard, because you cannot change things in two places simultaneously. All synchronisation is limited by the bits you can push down a channel up to, but not exceeding, the speed of light. In a single computer system this problem can be hidden from the programmer. In a distributed system, it cannot.
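To make that concrete: hiding a remote call behind a method still forces the caller to choose a failure policy, something a local method call never demands. A minimal sketch (names are illustrative):

```python
class RemoteError(Exception):
    """Stands in for a timeout, dropped connection, or half-formed response."""
    pass

def call_with_retries(fn, retries=3):
    # The caller, not the runtime, has to decide what failure means:
    # retry (possibly duplicating side effects downstream), give up,
    # or surface the error. This choice cannot be abstracted away.
    last_err = None
    for _ in range(retries):
        try:
            return fn()
        except RemoteError as err:
            last_err = err
    raise last_err
```

Whether a retried call is safe depends on the remote operation's semantics (idempotent or not), which is exactly the kind of knowledge a "it's just an object" abstraction hides.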

Probably the most devastating critique of the position that "it's just OO modeling!" came in A Note on Distributed Computing, published in 1994 by Waldo, Wyant, Wollrath and Kendall[0]:

"We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible."

[0] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7...


One of the things I've heard about Erlang is that its processes can be distributed very easily. The paper you cite was written four years before Erlang was open-sourced; I wonder if Erlang/OTP would hold up to their analysis.


If you look, one of the three strategies they examine is "treat all calls as remote", which is the approach taken in Erlang.


Honestly, I've been following these trends closely, and it seems like the answers to a) and b) come down to team size. If you have a bigger team with more development inertia, microservices can seem amazing and the tradeoffs are worth it (repeated work vs. development tempo).

For a small team that doesn't have the inertia problems microservices solve, they seem nice in theory but carry too much overhead to supplant a monolithic approach.


> How do you start developing multiple microservices at the same time?

Same as any other project: Develop from the outside in.

In practice, trying to develop in the "optimal order" leads to speculative development that will be wasted.


Nothing prevents you from having multiple microservices sharing the same codebase.

That said, the "it's just like the web" model doesn't sound fantastic to me. It sounds like your app now depends on contracts which are only enforced by good practices, not by something strongly typed you can check at compile time, unless you use something like protocol buffers to generate the boilerplate.


This is where the test-driven world, which in my experience is strongest on the dynamic language side of programming, has come back around full circle.

In microservices, everything is dynamically typed.

There is no single binary produced by a single compiler performing whole-program checks of consistency. Even tools like protobufs don't help when code bases drift, or someone introduces a foreign tool, or someone upgrades versions and introduces a subtle mismatch, or someone doesn't know you call their service and shuts it down ...

Turns out that driving from tests, and starting those tests from the outermost consumer, is a fairly well-proved way of coping with such conditions.
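A hypothetical sketch of what "starting from the outermost consumer" looks like in practice: the consumer pins down the message shape it depends on, so drift in a service it calls fails a test instead of failing in production. The expected keys here are illustrative.

```python
# Consumer-driven contract check, owned by the outermost consumer (e.g. the
# Game service), run against recorded or stubbed responses from the provider.
EXPECTED_USER_KEYS = {"id", "name"}

def check_user_contract(payload: dict) -> bool:
    # The contract: exactly these keys, with these runtime types.
    return (set(payload) == EXPECTED_USER_KEYS
            and isinstance(payload["id"], int)
            and isinstance(payload["name"], str))
```

When the provider team changes their response shape, this test breaks in the consumer's CI, which is the earliest point such a drift can be caught without a whole-program compiler.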


> There is no single binary produced by a single compiler performing whole-program checks of consistency. Even tools like protobufs don't help when code bases drift, or someone introduces a foreign tool, or someone upgrades versions and introduces a subtle mismatch, or someone doesn't know you call their service and shuts it down ...

Static typing is not a panacea, but large codebase plus dynamic typing everywhere sounds like a recipe for disaster. No matter the amount of testing.

> Turns out that driving from tests, and starting those tests from the outermost consumer, is a fairly well-proved way of coping with such conditions.

You need tests no matter what. However, static typing means a much greater confidence in your codebase.


As soon as you distribute your system, you have dynamic typing, whether you like it or not.

At runtime you are inspecting incoming messages and then routing them to code. It doesn't matter what language the code is written in, it will need to route and validate the messages at runtime.
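A minimal sketch of that runtime routing and validation, regardless of how statically typed the surrounding service is (the message types and handler here are made up for illustration):

```python
import json

HANDLERS = {}

def handler(msg_type):
    """Register a function as the handler for one message type."""
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@handler("create_user")
def create_user(body):
    # Validation has to happen here, at runtime, on every message.
    if not isinstance(body.get("name"), str):
        raise ValueError("create_user requires a string 'name'")
    return {"status": "created", "name": body["name"]}

def dispatch(raw: bytes):
    msg = json.loads(raw)            # may fail: half-formed message
    fn = HANDLERS.get(msg.get("type"))
    if fn is None:
        raise ValueError(f"unknown message type {msg.get('type')!r}")
    return fn(msg.get("body", {}))
```

Every branch in `dispatch` is a runtime check that no compiler on the sending side can discharge for you.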

The type system cannot provide compile-time assurances of behaviour, because it cannot create a single consistent binary which enforces the guarantees.

Your only remaining tool is to drive code from tests and only from tests.


> As soon as you distribute your system, you have dynamic typing, whether you like it or not.

You have serialization/deserialization issues. You can still type your messages.

> At runtime you are inspecting incoming messages and then routing them to code. It doesn't matter what language the code is written in, it will need to route and validate the messages at runtime.

Of course.

> The type system cannot provide compile-time assurances of behaviour, because it cannot create a single consistent binary which enforces the guarantees.

If you make the assumption that you deploy up-to-date binaries, then knowing at compile time that your producer and consumer use the same data structure for the messages they exchange would give me much better confidence than "it looks like the API conforms to what's written on the wiki".


> You can still type your messages.

You can hope that they respect the type. For a robust distributed system, you will have to check everything at runtime.

> If you make the assumption that you deploy up-to-date binaries, then knowing at compile time that your producer and consumer use the same data structure for the messages they exchange would give me much better confidence than "it looks like the API conforms to what's written on the wiki".

My reading is that we agree that running code is the only source of truth, we disagree on what guarantees distribution deprives us of.


If you cannot ensure that your producer receives messages following a certain schema, even though you enforce it statically in your codebase, you also cannot ensure that your running code passes your tests.


Which is why I start from integration testing of the whole system, with frenemy tests for any foreign services that I must rely on.

You're right that tests don't make Byzantine failures go away. But neither do static types. My point stands that distribution turns every system into an analogue of dynamic-language programming, and the emphasis on tooling shifts along with it.


This reminded me of the AngularJS team deciding to go with (optional) runtime type checking over compile-time checking (which is what TypeScript has done to JavaScript). Their reasoning was that you can use runtime checking for REST responses, which can be argued to somewhat reduce the need for writing tests.


Most systems I've seen these days don't have any compile-time type checking, since they're all written in Ruby, Python, or Node.js.

In the normal case of development, you tend to have a broken-out system. For a game for example:

    + Game code
    --+ User authentication classes/functionality (which accesses DB)
    --+ Messaging classes/functionality (which accesses DB)
    --+ User metrics classes/functionality (which accesses DB)
In the new design you'd have this:

    + Game code
    --+ User authentication classes/functionality (which accesses REST service)
    --+ Messaging classes/functionality (which accesses REST service)
    --+ User metrics classes/functionality (which accesses REST service)
In other words, in a clean design, your Game code is accessing a library which provides user Authentication functionality, one which provides Messaging functionality, and one which provides Metrics functionality.

In this new design, you have exactly the same thing - a library which abstracts the details of communicating with the service, encoding data, etc. A person making changes to those libraries, which other services use, is responsible for either not making backwards-incompatible changes, or, when that isn't possible, working with other teams to ensure a clean upgrade path (or doing it themselves, if your lines are sufficiently blurred).


> In this new design, you have exactly the same thing - a library which abstracts the details of communicating with the service, encoding data, etc. A person making changes to those libraries, which other services use, is responsible for either not making backwards-incompatible changes, or, when that isn't possible, working with other teams to ensure a clean upgrade path (or doing it themselves, if your lines are sufficiently blurred).

The new design trades a "modular but monolithic" design for complexity and brittleness, IMHO. The ability to spin up new instances of a given service on demand is interesting, but it sure sounds like reinventing Erlang without Erlang's tooling.



I really wish the team I'm on could understand this!



